Research | Open Access

DriveMLM: aligning multi-modal large language models with behavioral planning states for autonomous driving

Erfei Cui 1,2, Wenhai Wang 2,3, Zhiqi Li 2,4, Jiangwei Xie 5, Haoming Zou 6, Hanming Deng 5, Gen Luo 2, Lewei Lu 5, Xizhou Zhu 7, Jifeng Dai 7,8 (corresponding author)
1. Shanghai Jiao Tong University, Shanghai, 200240, China
2. Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
3. The Chinese University of Hong Kong, Hong Kong, 999077, China
4. Nanjing University, Nanjing, 210023, China
5. SenseTime Research, Shanghai, 200233, China
6. Stanford University, Stanford, CA 94305, USA
7. Tsinghua University, Beijing, 100084, China
8. Beijing National Research Center for Information Science and Technology, Beijing, 100084, China

Abstract

Large language models (LLMs) have opened up new possibilities for intelligent agents, endowing them with human-like reasoning and cognitive abilities. In this work, we explore the potential of LLMs in autonomous driving (AD). We introduce DriveMLM, an LLM-based AD framework that can perform closed-loop autonomous driving in realistic simulators. To this end, (1) we bridge the gap between language decisions and vehicle control commands by standardizing the decision states according to an off-the-shelf motion planning module; (2) we employ a multimodal LLM (MLLM) to model the behavior planning module of a modular AD system, which takes driving rules, user commands, and inputs from various sensors (e.g., camera, LiDAR) as input, makes driving decisions, and provides explanations. This model can be plugged into existing AD systems such as Autopilot and Apollo for closed-loop driving; (3) we design an effective data engine to collect a dataset that includes decision states and corresponding explanation annotations for model training and evaluation. Extensive experiments show that replacing the decision-making modules of Autopilot and Apollo with DriveMLM yields significant improvements of 3.2 and 4.7 points on the CARLA Town05 Long benchmark, respectively, demonstrating the effectiveness of our model. We hope this work can serve as a baseline for autonomous driving with LLMs.
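The abstract's key idea is standardizing free-form language decisions into discrete decision states that an off-the-shelf motion planner can consume. The following is a minimal illustrative sketch of such an interface; the decision vocabulary, class names, and parser are all hypothetical assumptions (the paper's actual state definitions are not given on this page), not DriveMLM's implementation.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical standardized decision vocabulary: the MLLM's language output
# is mapped onto discrete speed and path states a motion planner understands.
class SpeedDecision(Enum):
    KEEP = "keep"
    ACCELERATE = "accelerate"
    DECELERATE = "decelerate"
    STOP = "stop"

class PathDecision(Enum):
    FOLLOW_LANE = "follow_lane"
    LEFT_CHANGE = "left_change"
    RIGHT_CHANGE = "right_change"

@dataclass
class BehaviorState:
    """A standardized behavior-planning state plus a natural-language explanation."""
    speed: SpeedDecision
    path: PathDecision
    explanation: str

def parse_llm_output(text: str) -> BehaviorState:
    """Toy keyword parser mapping free-form LLM text to a decision state."""
    lower = text.lower()
    if "stop" in lower:
        speed = SpeedDecision.STOP
    elif "slow" in lower:
        speed = SpeedDecision.DECELERATE
    else:
        speed = SpeedDecision.KEEP
    if "left" in lower:
        path = PathDecision.LEFT_CHANGE
    elif "right" in lower:
        path = PathDecision.RIGHT_CHANGE
    else:
        path = PathDecision.FOLLOW_LANE
    return BehaviorState(speed=speed, path=path, explanation=text)

state = parse_llm_output("Slow down and change to the left lane to avoid the parked truck.")
print(state.speed.value, state.path.value)  # → decelerate left_change
```

Because the downstream motion planner only ever sees states from this fixed vocabulary, any planner with such an interface could, in principle, consume the language model's decisions, which is what makes the plug-in design possible.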

Visual Intelligence, Article number: 22


Cite this article:
Cui E, Wang W, Li Z, et al. DriveMLM: aligning multi-modal large language models with behavioral planning states for autonomous driving. Visual Intelligence, 2025, 3: 22. https://doi.org/10.1007/s44267-025-00095-w

Metrics: 922 Views, 1 Crossref citation

Received: 26 June 2025
Revised: 23 October 2025
Accepted: 28 October 2025
Published: 03 December 2025
© The Author(s) 2025.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.