Traffic signal detection and classification in street views using an attention model

Yifan Lu¹, Jiaming Lu¹ (✉), Songhai Zhang¹, Peter Hall²
¹ TNList, Tsinghua University, Beijing 100084, China.
² Department of Computer Science, University of Bath, United Kingdom.

Abstract

Detecting small objects is a challenging task. We focus on a special case: the detection and classification of traffic signals in street views. We present a novel framework that utilizes a visual attention model to make detection more efficient, without loss of accuracy, and which generalizes well. The attention model is designed to generate a small set of candidate regions at a suitable scale so that small targets can be better located and classified. In order to evaluate our method in the context of traffic signal detection, we have built a traffic light benchmark with over 15,000 traffic light instances, based on Tencent street view panoramas. We have tested our method both on the dataset we have built and on the Tsinghua–Tencent 100K (TT100K) traffic sign benchmark. Experiments show that our method has superior detection performance and is quicker than the general Faster R-CNN object detection framework on both datasets. It is competitive with state-of-the-art specialist traffic sign detectors on TT100K, but is an order of magnitude faster. To show generality, we tested it on the LISA dataset without tuning, and obtained an average precision in excess of 90%.
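
The pipeline the abstract describes lends itself to a two-stage implementation: a lightweight attention network scores a downsampled view of the panorama, and only the few highest-scoring regions are cropped from the full-resolution image for detection and classification. Below is a minimal illustrative sketch in PyTorch; it is not the authors' released code, and the module `AttentionNet`, the helper `propose_regions`, and all channel sizes, crop sizes, and thresholds are assumptions made for illustration.

```python
# A minimal sketch (not the paper's implementation) of an attention-first
# pipeline for small-object detection: score a cheap downsampled pass, then
# crop only the top-scoring regions from the full-resolution image.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionNet(nn.Module):
    """Predicts a coarse 'objectness' heat map over a downsampled input.

    Hypothetical architecture; layer counts and channels are illustrative.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # one objectness score per coarse cell
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))


def propose_regions(heatmap, full_size, crop=256, thresh=0.5, max_regions=8):
    """Turn high-scoring heat-map cells into square crops in full-res coords."""
    h, w = heatmap.shape[-2:]
    full_h, full_w = full_size
    scores = heatmap.flatten()
    order = scores.argsort(descending=True)[:max_regions]
    regions = []
    for idx in order:
        if scores[idx] < thresh:
            break
        cy, cx = divmod(idx.item(), w)
        # Map the coarse cell centre back to full-resolution pixel coords.
        px = int((cx + 0.5) / w * full_w)
        py = int((cy + 0.5) / h * full_h)
        x0 = min(max(px - crop // 2, 0), full_w - crop)
        y0 = min(max(py - crop // 2, 0), full_h - crop)
        regions.append((x0, y0, crop, crop))
    return regions


if __name__ == "__main__":
    attention = AttentionNet().eval()
    panorama = torch.rand(1, 3, 1024, 2048)           # stand-in street view
    small = F.interpolate(panorama, size=(256, 512))  # cheap downsampled pass
    with torch.no_grad():
        heat = attention(small)[0, 0]
    for (x0, y0, w, h) in propose_regions(heat, full_size=(1024, 2048)):
        crop = panorama[:, :, y0:y0 + h, x0:x0 + w]
        # A full system would now run a detector/classifier on each crop;
        # only these few crops are processed at high resolution.
        print("candidate region:", (x0, y0, w, h), "crop:", tuple(crop.shape))
```

Because the expensive high-resolution processing runs only on a handful of candidate crops rather than over the whole panorama, a design along these lines can be substantially faster than sliding a full detector everywhere, which is consistent with the speed-ups the abstract reports.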

Keywords: traffic light detection, traffic light benchmark, small object detection, CNN


Publication history

Revised: 09 March 2018
Accepted: 07 April 2018
Published: 04 August 2018
Issue date: September 2018

Copyright

© The Author(s) 2018

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61772298), Research Grant of Beijing Higher Institution Engineering Research Center, and the Tsinghua–Tencent Joint Laboratory for Internet Innovation Technology.

Rights and permissions

This article is published with open access at Springerlink.com

The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
