SpinNet: Spinning convolutional network for lane boundary detection

Ruochen Fan¹, Xuanrun Wang¹, Qibin Hou², Hanchao Liu¹, Tai-Jiang Mu¹ (corresponding author)
1. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.

Abstract

In this paper, we propose a simple but effective framework for lane boundary detection, called SpinNet. Since cars and pedestrians often occlude lane boundaries, and since the local features of lane boundaries are not distinctive, analyzing and collecting global context information is crucial for lane boundary detection. To this end, we design a novel spinning convolution layer and a brand-new lane parameterization branch in our network to detect lane boundaries from a global perspective. To extract features in narrow strip-shaped fields, the spinning convolution layer adopts strip-shaped convolutions with 1×n or n×1 kernels. To tackle the problem that straight strip-shaped convolutions can only extract features in vertical or horizontal directions, we introduce feature map rotation, allowing the convolutions to be applied in multiple directions so that information can be collected along a whole lane boundary. Moreover, unlike most existing lane boundary detectors, which extract lane boundaries from segmentation masks, our lane parameterization branch predicts a curve expression for the lane boundary at each pixel of the output feature map; the network then predicts a weight for each curve and combines them to better form the final lane boundaries. Our framework is easy to implement and end-to-end trainable. Experiments show that our proposed SpinNet outperforms state-of-the-art methods.
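
To make the spinning convolution idea concrete, here is a minimal PyTorch sketch (an illustration, not the authors' implementation): strip-shaped 1×n and n×1 convolutions are applied to rotated copies of the feature map, and the responses are rotated back, so that the long, thin receptive fields cover several directions. All names, the kernel length n, and the set of angles are assumptions made for illustration.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def rotate(x, angle_deg):
        # Rotate a feature map (N, C, H, W) about its centre with an affine
        # grid; regions rotated in from outside the map are zero-filled.
        a = math.radians(angle_deg)
        mat = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                            [math.sin(a),  math.cos(a), 0.0]],
                           dtype=x.dtype, device=x.device)
        mat = mat.unsqueeze(0).repeat(x.size(0), 1, 1)
        grid = F.affine_grid(mat, list(x.size()), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    class SpinningConv(nn.Module):
        def __init__(self, channels, n=9, angles=(0.0, 45.0, 90.0, 135.0)):
            super().__init__()
            self.angles = angles
            # Strip-shaped kernels: 1 x n (horizontal) and n x 1 (vertical).
            self.h_conv = nn.Conv2d(channels, channels, (1, n), padding=(0, n // 2))
            self.v_conv = nn.Conv2d(channels, channels, (n, 1), padding=(n // 2, 0))
            self.fuse = nn.Conv2d(2 * len(angles) * channels, channels, 1)

        def forward(self, x):
            outs = []
            for a in self.angles:
                r = rotate(x, a)                   # spin the feature map
                s = torch.cat([self.h_conv(r), self.v_conv(r)], dim=1)
                outs.append(rotate(s, -a))         # spin responses back
            return self.fuse(torch.cat(outs, dim=1))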
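
The lane parameterization branch can be sketched in the same spirit. Here we assume, purely for illustration, a quadratic curve model x = ay² + by + c: each pixel regresses three coefficients plus a confidence weight, and the per-pixel predictions for one boundary are combined by a confidence-weighted average. The exact curve model and head design are our assumptions, not the paper's specification.

    import torch
    import torch.nn as nn

    class CurveHead(nn.Module):
        # Per-pixel curve regression: 3 coefficients and 1 confidence logit.
        def __init__(self, in_channels):
            super().__init__()
            self.coeff = nn.Conv2d(in_channels, 3, 1)   # (a, b, c) per pixel
            self.conf = nn.Conv2d(in_channels, 1, 1)    # confidence logit

        def forward(self, feat, mask):
            # feat: (N, C, H, W); mask: (N, 1, H, W), 1 where a pixel belongs
            # to the lane boundary instance being parameterized.
            coeffs = self.coeff(feat)                     # (N, 3, H, W)
            w = torch.sigmoid(self.conf(feat)) * mask     # zero out background
            w_sum = w.sum(dim=(2, 3)).clamp(min=1e-6)     # (N, 1)
            curve = (coeffs * w).sum(dim=(2, 3)) / w_sum  # weighted average
            return curve                                  # (N, 3): a, b, c

    def sample_curve(curve, ys):
        # Evaluate x = a*y^2 + b*y + c at the given row coordinates ys.
        a, b, c = curve[:, 0:1], curve[:, 1:2], curve[:, 2:3]
        return a * ys ** 2 + b * ys + c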

Keywords: deep learning, autonomous driving, object detection, lane boundary detection


Publication history

Revised: 20 December 2019
Accepted: 24 December 2019
Published: 17 January 2020
Issue date: December 2019

Copyright

© The author(s) 2019

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Project No. 61572264), Research Grant of Beijing Higher Institution Engineering Research Center, and Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.

Rights and permissions

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
