
Fluid-inspired field representation for risk assessment in road scenes

Xuanpeng Li1, Lifeng Zhu1 (corresponding author), Qifan Xue1, Dong Wang1, Yongjie Jessica Zhang2
1 School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
2 Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA

Abstract

Predicting the likely evolution of traffic scenes is a challenging task because of high uncertainty from sensing technology and the dynamic environment; such uncertainty can cause motion planning to fail for intelligent agents such as autonomous vehicles. In this paper, we propose a fluid-inspired model to estimate collision risk in road scenes. Multi-object states are detected and tracked, and a stable fluid model is then adopted to construct the risk field, with the objects' state spaces used as boundary conditions in the simulation of advection and diffusion processes. We have evaluated our approach on the public KITTI dataset; our model can provide predictions even in cases of misdetection and tracking error caused by occlusion. It is therefore a promising approach for collision risk assessment in road scenes.

Keywords: fluid-inspired risk field, multi-object tracking, road scenes
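
The approach summarized above treats each tracked object's state as a boundary condition for an advection and diffusion pass over a 2D grid covering the road plane, in the spirit of a stable-fluids solver. The Python sketch below is a minimal illustration of that idea, not the authors' implementation: the grid resolution, cell size, risk injection scheme, nearest-neighbour backtrace, and diffusion rate are all hypothetical choices made for brevity.

import numpy as np

def inject_objects(risk, vel, objects, cell_size=0.5):
    # Each tracked object state (x, y, vx, vy) marks its cell with unit risk and
    # an advecting velocity; this stands in for the boundary conditions above.
    for x, y, vx, vy in objects:
        i, j = int(y / cell_size), int(x / cell_size)
        if 0 <= i < risk.shape[0] and 0 <= j < risk.shape[1]:
            risk[i, j] = 1.0
            vel[i, j] = (vy / cell_size, vx / cell_size)   # cells per second
    return risk, vel

def advect(risk, vel, dt):
    # Semi-Lagrangian advection: trace each cell backwards along the velocity
    # field and sample the risk there (nearest-neighbour keeps the sketch short).
    h, w = risk.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_i = np.clip(ii - dt * vel[..., 0], 0, h - 1).round().astype(int)
    src_j = np.clip(jj - dt * vel[..., 1], 0, w - 1).round().astype(int)
    return risk[src_i, src_j]

def diffuse(risk, rate, dt, iters=20):
    # Implicit diffusion solved with Jacobi iterations, as in stable-fluids schemes.
    a = rate * dt
    out = risk.copy()
    for _ in range(iters):
        neighbours = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out = (risk + a * neighbours) / (1.0 + 4.0 * a)
    return out

# One prediction step for two hypothetical tracked vehicles; in practice the
# object list would come from the per-frame detection and tracking output.
risk = np.zeros((100, 100))
vel = np.zeros((100, 100, 2))
objects = [(10.0, 25.0, 8.0, 0.0), (30.0, 20.0, -5.0, 1.0)]   # illustrative states
risk, vel = inject_objects(risk, vel, objects)
risk = diffuse(advect(risk, vel, dt=0.1), rate=0.05, dt=0.1)

Repeating the advect-and-diffuse step over successive frames spreads risk ahead of each object along its heading, which is the kind of behaviour the abstract relies on to keep covering cells whose occupants are briefly misdetected or lost by the tracker.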


Publication history

Received: 01 June 2020
Accepted: 21 July 2020
Published: 29 October 2020
Issue date: December 2020

Copyright

© The Author(s) 2020

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant No. 61906038, the Fundamental Research Funds for the Central Universities under Grant No. 2242019K40039, and the Zhishan Youth Scholar Program of Southeast University.

Rights and permissions

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
