Publications
Open Access Just Accepted
RIPES: A Robust Multi-Rodent Pose Estimation Framework Integrating Instance Segmentation and Biometric Features
Tsinghua Science and Technology
Available online: 20 June 2025

Accurate pose estimation and tracking of individual rodents in multi-individual video data are crucial for comprehensive behavioral analysis but remain challenging. Existing methods like DeepLabCut and SLEAP, while effective in single-individual scenarios, often struggle with multi-individual settings, particularly under severe occlusions. Although top-down approaches relying on object detection can mitigate some occlusion issues, they frequently require extensive manual labeling. Furthermore, current multi-individual tracking solutions such as idTrackerai, Toxtrac, and EDDSN-MRT provide only limited positional data, failing to deliver precise, full-body pose estimates.

To overcome these limitations, we developed the Rodent Identification and Pose Estimation System (RIPES), a novel framework that integrates instance segmentation with identity-preserving tracking. By isolating individual rodents and accurately estimating their skeletal poses, RIPES maintains robust performance even in complex multi-individual environments with severe occlusions. Validation experiments on public datasets and comparisons against state-of-the-art methods demonstrate RIPES’s superior accuracy in multi-individual pose estimation and tracking.
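
The abstract describes a two-stage, top-down design: instance segmentation isolates each rodent, pose estimation then runs per instance, and identities are preserved across frames. As a hedged illustration of that general pattern only, not the RIPES implementation, the sketch below shows one way such a pipeline can be wired together; the functions `segment_rodents` and `estimate_pose` and the greedy mask-overlap matcher are hypothetical placeholders.

```python
# Illustrative top-down pipeline: segment each rodent, estimate its pose on the
# isolated instance, and carry identities forward frame by frame by matching
# masks to the previous frame. All components here are placeholders.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    poses: list = field(default_factory=list)  # one keypoint array per frame

def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two boolean instance masks (e.g. numpy arrays)."""
    inter = (mask_a & mask_b).sum()
    union = (mask_a | mask_b).sum()
    return inter / union if union else 0.0

def process_video(frames, segment_rodents, estimate_pose, iou_threshold=0.3):
    """segment_rodents(frame) -> list of boolean instance masks (hypothetical);
    estimate_pose(frame, mask) -> keypoint array for that instance (hypothetical)."""
    tracks = []
    prev = []  # (mask, track) pairs from the previous frame
    for frame in frames:
        current = []
        for mask in segment_rodents(frame):
            pose = estimate_pose(frame, mask)
            # Greedy identity assignment: reuse the previous-frame track whose
            # mask overlaps this detection the most; otherwise start a new track.
            best = max(range(len(prev)),
                       key=lambda i: mask_iou(mask, prev[i][0]),
                       default=None)
            if best is not None and mask_iou(mask, prev[best][0]) >= iou_threshold:
                _, track = prev.pop(best)
            else:
                track = Track(track_id=len(tracks))
                tracks.append(track)
            track.poses.append(pose)
            current.append((mask, track))
        prev = current
    return tracks
```

In practice the segmentation and pose components would be learned models, and identity matching would typically use a more robust association strategy than the greedy overlap shown here.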

Beyond technical validation, we applied RIPES to analyze motor activities in an osteoarthritis (OA) mouse model influenced by intermittent fasting (IF). By extracting high-resolution pose and movement metrics from multiple individuals simultaneously, we uncovered significant behavioral differences between IF and control groups. These differences, evident in locomotor patterns and exploratory behaviors, highlight the utility of RIPES in elucidating subtle phenotypic variations within disease models.
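
The abstract does not spell out which locomotor metrics were extracted; as a hedged example of how such measures are commonly derived from per-frame keypoints, the snippet below computes distance travelled and mean speed from a body-centre trajectory. The frame rate and pixel-to-centimetre scale are assumed values, not figures from the study.

```python
import numpy as np

def locomotor_metrics(centroids, fps=30.0, px_per_cm=10.0):
    """Distance travelled and mean speed from an (N, 2) array of per-frame
    body-centre coordinates in pixels. fps and px_per_cm are assumed values."""
    centroids = np.asarray(centroids, dtype=float)
    step_px = np.linalg.norm(np.diff(centroids, axis=0), axis=1)  # per-frame step length
    distance_cm = step_px.sum() / px_per_cm
    duration_s = len(centroids) / fps
    return {
        "distance_cm": distance_cm,
        "mean_speed_cm_s": distance_cm / duration_s if duration_s else 0.0,
    }

# Example with synthetic data: an animal moving one pixel per frame for 300 frames.
trajectory = np.cumsum(np.ones((300, 2)), axis=0)
print(locomotor_metrics(trajectory))
```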

Open Access Issue
Rodent Arena Multi-View Monitor (RAMM): A Camera Synchronized Photographic Control System for Multi-View Rodent Monitoring
Tsinghua Science and Technology 2025, 30(5): 2195-2214
Published: 29 April 2025

Although multi-view monitoring techniques are widely applied in skinned model reconstruction and movement analysis, traditional systems built on high-performance personal computers (PCs) or industrial cameras are often prohibitive due to high cost and limited scalability. Here, we introduce the Rodent Arena Multi-View Monitor (RAMM), an affordable, scalable multi-view image acquisition system for skinned model reconstruction in animal studies that uses consumer Android devices and a wireless network for synchronized monitoring. Smartphones serve as camera nodes with local data storage, enabling cost-effective scalability, and the system's custom synchronization solution and portability make it well suited to research and education in rodent behavior analysis, offering a practical alternative for institutions with limited budgets. Its portability and flexibility also make it an ideal tool for rodent skinned model research based on multi-view image acquisition. To evaluate performance, we conducted an oscilloscope analysis to verify the effectiveness of synchronization and built a 45-camera-node setup to demonstrate RAMM's cost efficiency and ease of scaling to large systems. Data quality was further validated with the Instant Neural Graphics Primitives (Instant-NGP) method: a PSNR of 30.49 dB was achieved using only 25 images with intrinsic and extrinsic parameters, meeting the requirements of 3D representation algorithms for well-synchronized data.
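
The reconstruction quality above is reported as PSNR between Instant-NGP renderings and captured views. As a hedged illustration, not the authors' evaluation code, the snippet below computes PSNR for a rendered/reference image pair, assuming 8-bit intensities.

```python
import numpy as np

def psnr(rendered, reference, peak=255.0):
    """Peak signal-to-noise ratio in dB between a rendered view and the
    captured reference image (same shape; 8-bit intensity range assumed)."""
    rendered = np.asarray(rendered, dtype=float)
    reference = np.asarray(reference, dtype=float)
    mse = np.mean((rendered - reference) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example with synthetic images: the rendered view is the reference plus mild noise.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(480, 640, 3)).astype(float)
rendered = np.clip(reference + rng.normal(0, 5, size=reference.shape), 0, 255)
print(f"PSNR: {psnr(rendered, reference):.2f} dB")
```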
