Research Article | Open Access

Video-Bench: A comprehensive benchmark and toolkit for evaluating video-based large language models

Shenzhen Graduate School, Peking University, Shenzhen 518055, China
Pengcheng Laboratory, Shenzhen 518000, China
Microsoft Cloud AI, Beijing 100080, China
Pandalla.ai, Singapore 048624, Singapore
Meta AI, Shanghai 201203, China

Abstract

Video-based large language models (Video-LLMs) have recently been introduced, targeting both fundamental improvements in perception and comprehension and a diverse range of user inquiries. In pursuit of the ultimate goal of achieving artificial general intelligence, a truly intelligent Video-LLM should not only see and understand its surroundings, but also possess human-level commonsense and make well-informed decisions for users. To guide the development of such a model, the establishment of a robust and comprehensive evaluation system becomes crucial. To this end, this paper proposes Video-Bench, a new comprehensive benchmark along with a toolkit specifically designed for evaluating Video-LLMs. The benchmark comprises 10 meticulously crafted tasks, evaluating the capabilities of Video-LLMs across three distinct levels: video-exclusive understanding, prior knowledge-based question-answering, and comprehension and decision-making. In addition, we introduce an automatic toolkit tailored to process model outputs for various tasks, facilitating the calculation of metrics and conveniently generating final scores. We evaluate 9 representative Video-LLMs using Video-Bench. The findings reveal that current Video-LLMs still fall considerably short of human-like comprehension and analysis of real-world video, and offer valuable insights for future research directions. The benchmark and toolkit are available at https://github.com/PKU-YuanGroup/Video-Bench.
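The abstract describes a toolkit that maps model outputs onto task answers, computes per-task metrics, and aggregates them into a final score. The snippet below is a minimal sketch of that kind of answer-matching and scoring pipeline; the data layout, matching rule, and function names are illustrative assumptions for exposition, not the Video-Bench toolkit's actual API.

```python
# Minimal sketch of answer matching and scoring for a multiple-choice video benchmark.
# The data layout, matching rule, and function names are illustrative assumptions,
# NOT the actual Video-Bench toolkit API.
import re
from collections import defaultdict
from typing import Optional


def extract_choice(model_output: str, options: dict) -> Optional[str]:
    """Map a free-form model answer to one of the option letters (e.g., 'A'-'D')."""
    # 1) Look for an explicit option letter such as "(B)" or "Answer: B".
    m = re.search(r"\b([A-D])\b", model_output.upper())
    if m and m.group(1) in options:
        return m.group(1)
    # 2) Otherwise, fall back to matching an option's text inside the answer.
    for letter, text in options.items():
        if text.lower() in model_output.lower():
            return letter
    return None  # unmatched answers count as wrong


def score(predictions: list) -> dict:
    """Compute per-task accuracy and a simple mean over tasks as the final score.

    Each prediction is assumed to be a dict like:
    {"task": "...", "options": {"A": "...", "B": "..."},
     "answer": "B", "model_output": "I think the answer is (B) ..."}
    """
    correct, total = defaultdict(int), defaultdict(int)
    for p in predictions:
        total[p["task"]] += 1
        if extract_choice(p["model_output"], p["options"]) == p["answer"]:
            correct[p["task"]] += 1
    per_task = {t: correct[t] / total[t] for t in total}
    per_task["final"] = sum(per_task.values()) / len(per_task)
    return per_task
```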

Computational Visual Media, Pages 71-84


Cite this article:
Ning M, Zhu B, Xie Y, et al. Video-Bench: A comprehensive benchmark and toolkit for evaluating video-based large language models. Computational Visual Media, 2026, 12(1): 71-84. https://doi.org/10.26599/CVM.2025.9450516

Received: 20 June 2025
Accepted: 01 October 2025
Published: 02 February 2026
© The Author(s) 2025.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
