Although multi-view monitoring techniques have been widely applied to skinned model reconstruction and movement analysis, traditional systems built on high-performance personal computers (PCs) or industrial cameras are often prohibitively expensive and difficult to scale. Here, we introduce the Rodent Arena Multi-View Monitor (RAMM), an affordable, scalable multi-view image acquisition system for skinned model reconstruction in animal studies that uses consumer Android devices over a wireless network for synchronized monitoring. RAMM employs smartphones as camera nodes with local data storage, enabling cost-effective scaling, and its custom synchronization solution, portability, and flexibility make it well suited both to research and education in rodent behavior analysis and to multi-view skinned model studies at institutions with limited budgets. To evaluate performance, we verify the effectiveness of synchronization with an oscilloscope analysis and build a 45-node setup that demonstrates RAMM's cost efficiency and the ease of constructing large-scale systems. Data quality is further validated with the Instant Neural Graphics Primitives (Instant-NGP) method: using only 25 images together with camera intrinsic and extrinsic parameters, RAMM achieves a PSNR of 30.49 dB, meeting the requirements that 3D representation algorithms place on well-synchronized data.
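The custom synchronization mechanism itself is not detailed in this abstract. As a minimal sketch of one common approach to the same problem, the Python snippet below broadcasts a capture command that carries a shared future fire time, so network jitter affects only command delivery rather than shutter timing. The port number, message format, and the assumption that node clocks are already aligned (e.g., via NTP) are ours for illustration, not RAMM's actual protocol.

```python
import socket
import time

# Hypothetical values; the paper's actual port and message format are not
# described in the abstract.
BROADCAST_ADDR = ("255.255.255.255", 9999)
ARM_DELAY_S = 0.5  # lead time so every node receives the command before firing


def broadcast_capture_trigger() -> None:
    """Send one capture command carrying a shared future fire time.

    Each camera node is assumed to sleep until the received timestamp and
    then trigger its shutter, which tolerates delivery jitter as long as
    node clocks are synchronized (e.g., via NTP).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    fire_at = time.time() + ARM_DELAY_S
    sock.sendto(f"CAPTURE {fire_at:.6f}".encode(), BROADCAST_ADDR)
    sock.close()
```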
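For the reported image-quality figure, PSNR compares a held-out camera view against the corresponding Instant-NGP rendering. The sketch below shows the standard computation for 8-bit images; the function name and array conventions are our own, not from the paper.

```python
import numpy as np


def psnr(reference: np.ndarray, rendered: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a captured view and a rendering."""
    # Mean squared error over all pixels and channels.
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)
```

By this formula, a PSNR above 30 dB on an 8-bit scale corresponds to a mean squared error below roughly 65, so the reported 30.49 dB indicates close agreement between the captured views and the reconstructed model.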
M. W. Mathis and A. Mathis, Deep learning tools for the measurement of animal behavior in neuroscience, Current Opinion in Neurobiology, vol. 60, pp. 1–11, 2020.
K. Huang, Y. Wang, H. Ning, T. Wan, C. Guo, Y. Wu, and J. Pei, A hierarchical 3D-motion learning framework for animal spontaneous behavior mapping, Nature Communications, vol. 12, no. 1, p. 2784, 2021.
T. D. Pereira, D. E. Aldarondo, L. Willmore, M. Kislin, S. S. H. Wang, M. Murthy, and J. W. Shaevitz, Fast animal pose estimation using deep neural networks, Nature Methods, vol. 16, no. 1, pp. 117–125, 2019.
W. Yan, Design of laser camera synchronization trigger for GJ-6 track inspection system, China Railway Science, vol. 39, no. 4, pp. 139–144, 2018.
X. Wang, X. D. Zhang, and Y. Q. Zhao, Synchronization of video sequences through 3D trajectory reconstruction, Acta Automatica Sinica, vol. 43, no. 10, pp. 1759–1772, 2017.
B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, NeRF: Representing scenes as neural radiance fields for view synthesis, Communications of the ACM, vol. 65, no. 1, pp. 99–106, 2021.
T. Müller, A. Evans, C. Schied, and A. Keller, Instant neural graphics primitives with a multiresolution hash encoding, ACM Transactions on Graphics, vol. 41, no. 4, pp. 1–15, 2022.
X. X. Lu, A review of solutions for perspective-n-point problem in camera pose estimation, Journal of Physics: Conference Series, vol. 1087, no. 5, p. 052009, 2018.
M. A. Fischler and R. C. Bolles, Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
H. Shim, J. Luo, and T. Chen, A subspace model-based approach to face relighting under unknown lighting and poses, IEEE Transactions on Image Processing, vol. 17, no. 8, pp. 1331–1341, 2008.
Y. Furukawa and C. Hernández, Multi-view stereo: A tutorial, Foundations and Trends in Computer Graphics and Vision, vol. 9, nos. 1–2, pp. 1–148, 2015.
U. Sara, M. Akter, and M. S. Uddin, Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study, Journal of Computer and Communications, vol. 7, no. 3, pp. 8–18, 2019.
T. Lindeberg, Scale invariant feature transform, Scholarpedia, vol. 7, no. 5, p. 10491, 2012.
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).