Abstract
Safety is the most critical problem in autonomous driving. Crashes often occur in long-tail scenarios, which are neither frequent nor representative of normal driving conditions. Many severe failures are caused not by a single error but by the accumulation of coupled behaviors and environmental factors over time. Such long-tail scenarios are difficult to evaluate with traditional open-loop safety analysis methods. To address these challenges, this study discusses how world models enable long-tail scenario generation. Through closed-loop inference, world models can capture how an agent's own decisions influence subsequent states and interactions. In addition, world models support scenario-specific generation by enabling controllable conditioning and targeted intervention on agent behaviors and environmental factors. Looking ahead, avoiding unrealistic hallucinations, maintaining system-level evaluation, and addressing errors arising from long-term interactions and multi-step accumulation remain the key open problems in safety evaluation for autonomous driving.