Tsinghua Science and Technology Open Access Editor-in-Chief: Jiaguang SUN
CFP–Special Issue on Multi-Source Visual Fusion and Intelligence: From Perception to Cognitive Understanding

With the rapid proliferation of intelligent sensors, edge devices, and AI-driven applications, enormous volumes of visual data are being generated from a wide range of sources—such as RGB cameras, depth sensors, LiDAR, thermal imagers, aerial drones, radar, and even knowledge graphs. Effectively integrating and interpreting these heterogeneous visual sources has become a critical challenge in the fields of artificial intelligence, computer vision, and cognitive computing.


Traditional single-source visual models often fall short in complex real-world environments due to noise, occlusion, or modality limitations. Multi-source visual fusion aims to overcome these constraints by combining complementary information from diverse data modalities, enabling more robust perception and high-level scene understanding. Beyond simple fusion, the next frontier lies in achieving cognitive-level intelligence—where machines not only see but understand, reason, and interpret multimodal visual data in a human-like way.


This Special Issue aims to bring together the latest advances in multi-source visual fusion, deep learning architectures, cross-modal representation learning, and neuro-symbolic reasoning. We are particularly interested in innovative solutions that bridge the gap between low-level perception and high-level cognitive understanding, enabling machines to make trustworthy and explainable decisions in complex scenarios.


Scope of Topics:

  1. Heterogeneous sensor data integration (e.g., RGB-Depth-Thermal, LiDAR-Image fusion)
  2. Spatio-temporal fusion of video and static visual data
  3. Multi-view and multi-perspective image fusion
  4. Transformer architectures for multimodal fusion
  5. Vision-based commonsense and knowledge reasoning
  6. Causality, abstraction, and concept grounding in vision
  7. Self-supervised and contrastive learning in multi-source settings
  8. Graph neural networks for cross-source visual representation
  9. Human-computer interaction with multi-modal perception


Submission Guidelines

Authors should prepare papers in accordance with the format requirements of Tsinghua Science and Technology, following the instructions given at https://www.sciopen.com/journal/1007-0214, and submit the complete manuscript through the online manuscript submission system at https://mc03.manuscriptcentral.com/tst, selecting the manuscript type “Special Issue on Multi-Source Visual Fusion and Intelligence: From Perception to Cognitive Understanding”.


Important Dates

Deadline for submissions: December 31, 2025


Guest Editors

  • Victor Hugo C. de Albuquerque, Federal University of Ceará, Brazil
  • Witold Pedrycz, University of Alberta, Canada
  • Weiwei Jiang, Beijing University of Posts and Telecommunications, China
  • Xin Ning, Institute of Semiconductors, Chinese Academy of Sciences, China
  • Zhili Zhou, Guangzhou University, China
  • Yuan He, Tsinghua University, China