With the rapid proliferation of intelligent sensors, edge devices, and AI-driven applications, enormous volumes of visual data are being generated from a wide range of sources, such as RGB cameras, depth sensors, LiDAR, thermal imagers, aerial drones, radar, and even knowledge graphs. Effectively integrating and interpreting these heterogeneous visual sources has become a critical challenge in the fields of artificial intelligence, computer vision, and cognitive computing.
Traditional single-source visual models often fall short in complex real-world environments due to noise, occlusion, or modality limitations. Multi-source visual fusion aims to overcome these constraints by combining complementary information from diverse data modalities, enabling more robust perception and high-level scene understanding. Beyond simple fusion, the next frontier lies in achieving cognitive-level intelligence—where machines not only see but understand, reason, and interpret multimodal visual data in a human-like way.
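To make the notion of combining complementary modalities concrete, the sketch below shows a minimal feature-level (late) fusion of two visual sources, such as RGB and depth. It is purely illustrative: the module name, feature dimensions, and concatenation-plus-projection strategy are assumptions for the example, not methods prescribed by this call.

```python
# Minimal illustrative sketch of multi-source feature fusion (PyTorch).
# All names and dimensions here are hypothetical choices for the example.
import torch
import torch.nn as nn


class SimpleLateFusion(nn.Module):
    """Fuses per-modality feature vectors by concatenation and projection."""

    def __init__(self, rgb_dim: int = 512, depth_dim: int = 256, out_dim: int = 128):
        super().__init__()
        # Each modality gets its own lightweight encoder head.
        self.rgb_head = nn.Sequential(nn.Linear(rgb_dim, out_dim), nn.ReLU())
        self.depth_head = nn.Sequential(nn.Linear(depth_dim, out_dim), nn.ReLU())
        # Fusion layer maps the concatenated features to a joint representation.
        self.fuse = nn.Linear(2 * out_dim, out_dim)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([self.rgb_head(rgb_feat), self.depth_head(depth_feat)], dim=-1)
        return self.fuse(joint)


# Usage: fuse a batch of 4 RGB and depth feature vectors.
model = SimpleLateFusion()
fused = model(torch.randn(4, 512), torch.randn(4, 256))
print(fused.shape)  # torch.Size([4, 128])
```

Submissions to this Special Issue are of course expected to go well beyond such simple fusion schemes, toward the cognitive-level understanding described above.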
This Special Issue aims to bring together the latest advances in multi-source visual fusion, deep learning architectures, cross-modal representation learning, and neuro-symbolic reasoning. We are particularly interested in innovative solutions that bridge the gap between low-level perception and high-level cognitive understanding, enabling machines to make trustworthy and explainable decisions in complex scenarios.
Scope of Topics:
Submission Guidelines
Authors should prepare papers in accordance with the format requirements of Tsinghua Science and Technology, with reference to the Instructions given at https://www.sciopen.com/journal/1007-0214, and submit the complete manuscript through the online manuscript submission system at https://mc03.manuscriptcentral.com/tst, selecting “Special Issue on Multi-Source Visual Fusion and Intelligence: From Perception to Cognitive Understanding” as the manuscript type.
Important Dates
Deadline for submissions: December 31, 2025
Guest Editors