Review | Open Access

A survey of multimodal composite editing and retrieval

Suyan Li1, Fuxiang Huang1,2, Lei Zhang1,3
1 Chongqing Key Laboratory of Bio-perception and Multimodal Intelligent Information Processing, Chongqing University, Chongqing 400044, China
2 Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China
3 School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China

Equal contributors

Abstract

In the real world, where information is abundant and diverse across modalities, understanding and utilizing various data types to improve retrieval systems is a key focus of research. Multimodal composite retrieval integrates diverse modalities such as text, images, and audio to provide more accurate, personalized, and contextually relevant results. Alongside retrieval, multimodal composite editing plays a crucial role in enabling users to refine or modify retrieved content through intuitive interactions, enhancing the overall effectiveness of multimodal systems. The task of multimodal composite editing is becoming increasingly important owing to its applications in various domains, including the creative industries, education, and user-driven content modification. Because it complements and extends the functionality of multimodal retrieval systems, a comprehensive evaluation and usage guide is needed to fully assess its capabilities and limitations. To facilitate a deeper understanding of this promising direction, this survey explores multimodal composite editing and retrieval in depth, covering image-text composite editing, image-text composite retrieval, and other multimodal composite retrieval. We systematically organize the application scenarios, methods, benchmarks, experiments, and future directions. Multimodal learning has gained significant popularity in the era of large AI models, as demonstrated by the growing number of surveys on multimodal learning and vision-language models with Transformers. To the best of our knowledge, this survey is the first comprehensive review of the literature on multimodal composite retrieval, offering a timely complement on multimodal fusion to existing reviews. Moreover, this paper bridges the gap between large model architectures and their applications in both retrieval and editing tasks, highlighting their intertwined roles in advancing multimodal systems.

Visual Intelligence
Article number: 15

Cite this article:
Li S, Huang F, Zhang L. A survey of multimodal composite editing and retrieval. Visual Intelligence, 2025, 3: 15. https://doi.org/10.1007/s44267-025-00086-x

603 Views | 3 Crossref citations

Received: 07 February 2025
Revised: 24 June 2025
Accepted: 25 June 2025
Published: 06 November 2025
© The Author(s) 2025.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.