Research | Open Access

Towards more reliable evaluation in pedestrian detection by rethinking “ignore regions”

Gang Li1,2, Xiang Li3, Shanshan Zhang1,2 (corresponding author), Jian Yang1,2
1 Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
2 Jiangsu Key Laboratory of Image and Video Understanding for Social Safety, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
3 VCIP, Nankai University, Tianjin, China

Abstract

Detecting pedestrians in crowds remains a challenging task, and more effort is needed to understand why detectors fail. When we perform an error analysis based on the traditional evaluation strategy, we find that it produces many misleading false positives, which in fact cover occluded pedestrians. The reason is that datasets usually contain two kinds of annotations: regular pedestrians (detection targets) labeled by full-body boxes, and ignored pedestrians (NOT detection targets) labeled by visible boxes. Ignored pedestrians are labeled as an additional category termed the “ignore region”. Nevertheless, detectors always predict a full-body box for each pedestrian. This gap results in the following case: when a detector successfully predicts a full-body box for an ignored pedestrian, a false positive is triggered due to the low overlap between the predicted full-body box and the labeled visible box of that ignored pedestrian. The problem becomes even more harmful as detectors improve and become more capable of locating occluded pedestrians. To alleviate this issue, we devise a new pedestrian detection pipeline, which considers the additional visible box at both the detection and evaluation stages. During detection, we predict an extra visible box apart from the full-body box for every instance; during evaluation, we employ visible boxes instead of full-body boxes to match the “ignore region”. We apply the new pipeline to dozens of detection methods and validate its effectiveness in reducing the over-reporting of false positives and providing more reliable evaluation results.
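The matching rule described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' actual evaluation code: the function name, box representation, and threshold are assumptions. Each prediction carries both a full-body box and a visible box; regular ground truth is matched with the full-body box, while ignore regions are matched with the visible box, so a correctly localized but ignored pedestrian counts as neither a true positive nor a false positive.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def classify_detections(preds, gt_full, ignore_visible, thr=0.5):
    """preds: list of (full_body_box, visible_box), sorted by score.
    gt_full: full-body boxes of regular pedestrians.
    ignore_visible: visible boxes of ignored pedestrians.
    Returns (true positives, false positives, ignored matches)."""
    matched = [False] * len(gt_full)
    tp = fp = ign = 0
    for full_box, vis_box in preds:
        # 1) Try to match a regular ground-truth pedestrian by full-body box.
        best, best_iou = -1, thr
        for i, g in enumerate(gt_full):
            if not matched[i] and iou(full_box, g) >= best_iou:
                best, best_iou = i, iou(full_box, g)
        if best >= 0:
            matched[best] = True
            tp += 1
            continue
        # 2) Otherwise, match ignore regions using the VISIBLE box.
        #    (The traditional protocol would use full_box here, so a good
        #    full-body prediction on an occluded pedestrian becomes a FP.)
        if any(iou(vis_box, r) >= thr for r in ignore_visible):
            ign += 1  # neither true positive nor false positive
        else:
            fp += 1
    return tp, fp, ign
```

For example, a prediction whose visible box overlaps an ignore region well, but whose full-body box does not overlap any annotation, is counted as ignored rather than as a false positive.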

Cite this article:
Li G, Li X, Zhang S, et al. Towards more reliable evaluation in pedestrian detection by rethinking “ignore regions”. Visual Intelligence, 2024, 2. https://doi.org/10.1007/s44267-024-00036-z


Received: 02 July 2023
Revised: 16 January 2024
Accepted: 17 January 2024
Published: 22 February 2024
© The Author(s) 2024.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.