In the real world, where information is abundant and spans multiple modalities, understanding and exploiting diverse data types to improve retrieval systems is a central focus of research. Multimodal composite retrieval integrates modalities such as text, images, and audio to deliver more accurate, personalized, and contextually relevant results. Alongside retrieval, multimodal composite editing plays a crucial role by enabling users to refine or modify retrieved content through intuitive interactions, which enhances the overall effectiveness of multimodal systems. This task is becoming increasingly important given its applications across domains such as the creative industries, education, and user-driven content modification. Because it complements and extends the functionality of multimodal retrieval systems, a comprehensive evaluation and usage guide is needed to fully assess its capabilities and limitations. To foster a deeper understanding of this promising direction, this survey examines multimodal composite editing and retrieval in depth, covering image-text composite editing, image-text composite retrieval, and other multimodal composite retrieval. We systematically organize the application scenarios, methods, benchmarks, experiments, and future directions. Multimodal learning has gained significant traction in the era of large AI models, as evidenced by the growing number of surveys on multimodal learning and Transformer-based vision-language models. To the best of our knowledge, this survey is the first comprehensive review of the literature on multimodal composite retrieval, offering a timely complement on multimodal fusion to existing reviews. Moreover, this paper bridges the gap between large model architectures and their applications in both retrieval and editing tasks, highlighting their intertwined roles in advancing multimodal systems.
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.