Open Access

Camera, LiDAR, and IMU Based Multi-Sensor Fusion SLAM: A Survey

Department of Automation, Tsinghua University, Beijing 100084, China

Abstract

In recent years, Simultaneous Localization And Mapping (SLAM) technology has prevailed in a wide range of applications, such as autonomous driving, intelligent robots, Augmented Reality (AR), and Virtual Reality (VR). Multi-sensor fusion using the three most popular types of sensors (i.e., visual sensors, LiDAR, and IMU) is becoming ubiquitous in SLAM, in part because of their complementary sensing capabilities and because any stand-alone sensor suffers from inevitable shortcomings (e.g., low precision and long-term drift) in challenging environments. In this article, we thoroughly survey the research efforts in this field and strive to provide a concise but complete review of the related work. Firstly, a brief introduction to the state estimator formulation in SLAM is presented. Secondly, state-of-the-art algorithms for the different multi-sensor fusion schemes are reviewed. Then we analyze the deficiencies of the reviewed approaches and outline directions for future research. This paper can serve as a brief guide for newcomers and as a comprehensive reference for experienced researchers and engineers exploring new and interesting research directions.
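As a rough illustration of the state estimator formulation mentioned above (the notation here is assumed for this sketch, not drawn from the paper), tightly coupled camera-LiDAR-IMU fusion is commonly posed as a sliding-window Maximum A Posteriori (MAP) problem that jointly minimizes a marginalization prior and the per-sensor residuals:

% Generic sliding-window MAP cost for tightly coupled camera-LiDAR-IMU fusion (illustrative notation).
% X = stacked window states, r_p/H_p = marginalization prior, r_B/r_C/r_L = IMU, visual, and LiDAR residuals.
\[
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}}
\Big\{
\left\| r_p - H_p \mathcal{X} \right\|^2
+ \sum_{k} \left\| r_{\mathcal{B}}\big(z_{b_k b_{k+1}}, \mathcal{X}\big) \right\|^2_{P_{b_k b_{k+1}}}
+ \sum_{(l,j)} \rho\Big( \left\| r_{\mathcal{C}}\big(z_{l}^{c_j}, \mathcal{X}\big) \right\|^2_{P_{l}^{c_j}} \Big)
+ \sum_{m} \left\| r_{\mathcal{L}}\big(z_{m}, \mathcal{X}\big) \right\|^2_{P_{m}}
\Big\}
\]

Here \(\mathcal{X}\) stacks the window states (poses, velocities, IMU biases, and landmark parameters), \(r_p\) and \(H_p\) encode the prior from marginalized states, \(r_{\mathcal{B}}\), \(r_{\mathcal{C}}\), and \(r_{\mathcal{L}}\) denote the IMU-preintegration, visual-reprojection, and LiDAR point-to-edge/plane residuals weighted by their measurement covariances \(P\), and \(\rho(\cdot)\) is a robust loss (e.g., Huber). Filtering-based estimators solve an equivalent linearized form recursively rather than over the full window.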

Cite this article:
Zhu J, Li H, Zhang T. Camera, LiDAR, and IMU Based Multi-Sensor Fusion SLAM: A Survey. Tsinghua Science and Technology, 2024, 29(2): 415-429. https://doi.org/10.26599/TST.2023.9010010

Received: 17 September 2022
Revised: 18 December 2022
Accepted: 21 February 2023
Published: 22 September 2023
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).