Open Access

Rodent Arena Multi-View Monitor (RAMM): A Camera Synchronized Photographic Control System for Multi-View Rodent Monitoring

School of Computer Science and Engineering, Central South University, Changsha 410083, China
Xiangya Hospital, Central South University, Changsha 410083, China

Abstract

Although multi-view monitoring techniques have been widely applied to skinned model reconstruction and movement analysis, traditional systems built on high-performance Personal Computers (PCs) or industrial cameras are often prohibitive due to their high cost and limited scalability. Here, we introduce the Rodent Arena Multi-View Monitor (RAMM), an affordable, scalable multi-view image acquisition system for skinned model reconstruction in animal studies that uses consumer Android devices and a wireless network for synchronized monitoring. RAMM employs smartphones as camera nodes with local data storage, enabling cost-effective scaling. Its custom synchronization solution, portability, and flexibility make it well suited to research and education in rodent behavior analysis and to rodent skinned model studies based on multi-view image acquisition, offering a practical alternative for institutions with limited budgets. To evaluate performance, we conduct an oscilloscope analysis to verify the effectiveness of synchronization, and we build a 45-camera-node setup to demonstrate RAMM's cost efficiency and the ease of constructing large-scale systems. Additionally, data quality is validated using the Instant Neural Graphics Primitives (Instant-NGP) method: a PSNR of 30.49 dB is achieved from only 25 images with known intrinsic and extrinsic parameters, meeting the requirements for well-synchronized data in 3D representation algorithms.
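
For context, the Peak Signal-to-Noise Ratio (PSNR) used in this validation measures how closely an Instant-NGP rendering matches its ground-truth photograph via the mean squared error. The short Python sketch below is illustrative only, not the authors' evaluation code, and it assumes 8-bit images; it shows how a score such as the reported 30.49 dB is computed:

import numpy as np

def psnr(reference: np.ndarray, rendered: np.ndarray, peak: float = 255.0) -> float:
    # Mean squared error between the ground-truth photo and the rendered view.
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    # PSNR in decibels: 10 * log10(peak^2 / MSE); higher means a closer match.
    return 10.0 * np.log10(peak ** 2 / mse)

Averaging this score over held-out views yields a single figure of merit; values around 30 dB are commonly taken to indicate a faithful reconstruction in novel view synthesis.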

References

[1]
M. W. Mathis and A. Mathis, Deep learning tools for the measurement of animal behavior in neuroscience, Current Opinion in Neurobiology, vol. 60, pp. 1–11, 2020.
[2]
K. Huang, Y. Wang, H. Ning, T. Wan, C. Guo, Y. Wu, and J. Pei, A hierarchical 3D-motion learning framework for animal spontaneous behavior mapping, Nature Communications, vol. 12, no. 1, p. 2784, 2021.
[3]
T. D. Pereira, D. E. Aldarondo, L. Willmore, M. Kislin, S. S. H. Wang, M. Murthy, and J. W. Shaevitz, Fast animal pose estimation using deep neural networks, Nature Methods, vol. 16, no. 1, pp. 117–125, 2019.
[4]
T. D. Pereira, N. Tabris, J. Li, S. Ravindranath, E. Papadoyannis, Z. Y. Wang, D. M. Turner, G. McKenzie-Smith, S. D. Kocher, A. L. Falkner, J. W. Shaevitz, and M. Murthy, SLEAP: A deep learning system for multi-animal pose tracking, Nature Methods, vol. 19, no. 4, pp. 486–495, 2022.
[5]
S. Zuffi, A. Kanazawa, D. Jacobs, and M. J. Black, 3D menagerie: Modeling the 3D shape and pose of animals, in Proc. the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, pp. 6365–6373, 2017.
[6]
G. Yang, X. Huang, Z. Hao, M. Y. Liu, S. Belongie, and B. Hariharan, BANMo: Building animatable 3D neural models from many casual videos, in Proc. the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, pp. 2863–2873, 2022.
[7]
R. M. Sousa, A. Wägli, M. Klaper, and P. Seitz, Multi-camera synchronization core implemented on USB3 based FPGA platform, in Proc. Image Sensors and Imaging Systems 2015, SPIE, vol. 9403, San Francisco, CA, USA, p. 94030F, 2015.
[8]
W. Yan, Design of laser camera synchronization trigger for GJ-6 track inspection system, China Railway Science, vol. 39, no. 4, pp. 139–144, 2018.
[9]
B. S. Huang, J. L. Zhu, Z. Y. Cheng, and Y. H. Chen, Multi-camera video synchronization based on feature point matching and refinement, in Proc. the 2019 IEEE/ACIS 18th International Conference on Computer and Information Science (ICIS), Beijing, China, pp. 320–325, 2019.
[10]
X. Wang, X. D. Zhang, and Y. Q. Zhao, Synchronization of video sequences through 3D trajectory reconstruction, Acta Automatica Sinica, vol. 43, no. 10, pp. 1759–1772, 2017.
[11]
H. Kim, M. Ishikawa, and Y. Yamakawa, Reference broadcast frame synchronization for distributed high-speed camera network, in Proc. the 2018 IEEE Sensors Applications Symposium (SAS), Seoul, Republic of Korea, pp. 1–6, 2018.
[12]
S. Zhou, S. Ma, W. Xiao, Z. Li, and H. Yan, Accurate camera synchronization using deep-shallow mixed models, in Proc. the 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), Xiamen, China, pp. 119–123, 2019.
[13]
B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, NeRF: Representing scenes as neural radiance fields for view synthesis, Communications of the ACM, vol. 65, no. 1, pp. 99–106, 2021.
[14]
S. Bultmann and S. Behnke, Real-time multi-view 3D human pose estimation using semantic feedback to smart edge sensors, arXiv preprint arXiv:2106.14729, 2021.
[15]
G. Omotara, S. Garstang, L. Brown, and J. K. Staveley, High-throughput and accurate 3D scanning of cattle using time-of-flight sensors and deep learning, bioRxiv, 2023.
[16]
T. Müller, A. Evans, C. Schied, and A. Keller, Instant neural graphics primitives with a multiresolution hash encoding, ACM Transactions on Graphics, vol. 41, no. 4, pp. 1–15, 2022.
[17]
X. X. Lu, A review of solutions for perspective-n-point problem in camera pose estimation, Journal of Physics: Conference Series, vol. 1087, no. 5, p. 052009, 2018.
[18]
M. A. Fischler and R. C. Bolles, Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[19]
J. L. Schonberger and J. M. Frahm, Structure-from-motion revisited, in Proc. the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 4104–4113, 2016.
[20]
C. Griwodz, S. Gasparini, L. Calvet, P. Gurdjos, F. Castan, B. Maujean, G. De Lillo, and Y. Lanthony, AliceVision Meshroom: An open-source 3D reconstruction pipeline, in Proc. the 12th ACM Multimedia Systems Conference, Istanbul, Turkey, pp. 241–247, 2021.
[21]
J. R. Over, J. A. Ritchie, J. Kranenburg, C. E. Brown, J. G. Briers, M. J. Sixsmith, V. R. Bailly, and C. J. Hapke, Processing coastal imagery with Agisoft Metashape Professional Edition, version 1.6—Structure from motion workflow documentation, US Geological Survey, Open-File Report 2021-1039, 2021.
[22]
RealityCapture, RealityCapture reconstruction software, https://www.capturingreality.com/Product, 2023.
[23]
G. Zeng, S. Paris, L. Quan, and F. Sillion, Study of volumetric methods for face reconstruction, in Proc. IEEE Intelligent Automation Conference, Hong Kong, China, pp. 870–875, 2003.
[24]
H. Shim, J. Luo, and T. Chen, A subspace model-based approach to face relighting under unknown lighting and poses, IEEE Transactions on Image Processing, vol. 17, no. 8, pp. 1331–1341, 2008.
[25]
Y. Furukawa and C. Hernández, Multi-view stereo: A tutorial, Foundations and Trends in Computer Graphics and Vision, vol. 9, nos. 1&2, pp. 1–148, 2015.
[26]
U. Sara, M. Akter, and M. S. Uddin, Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study, Journal of Computer and Communications, vol. 7, no. 3, pp. 8–18, 2019.
[27]
T. Lindeberg, Scale invariant feature transform, Scholarpedia, vol. 7, no. 5, p. 10491, 2012.
[28]
H. Bay, T. Tuytelaars, and L. Van Gool, SURF: Speeded up robust features, in Proc. Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, pp. 404–417, 2006.
[29]
S. Shin and J. Park, Binary radiance fields, in Proc. Advances in Neural Information Processing Systems 36, Red Hook, NY, USA, pp. 55919–55931, 2023.
Tsinghua Science and Technology
Pages 2195–2214
Cite this article:
Liu B, Qian Y, Wang J. Rodent Arena Multi-View Monitor (RAMM): A Camera Synchronized Photographic Control System for Multi-View Rodent Monitoring. Tsinghua Science and Technology, 2025, 30(5): 2195–2214. https://doi.org/10.26599/TST.2024.9010117

Received: 18 March 2024
Revised: 30 May 2024
Accepted: 24 June 2024
Published: 29 April 2025
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
