Abstract
Accurate diagnosis of Alzheimer’s Disease (AD) is essential for early intervention. Traditional methods that rely on single-modality data often fail to capture the complexity of the disease, limiting diagnostic accuracy. Integrating multimodal data, such as structural Magnetic Resonance Imaging (sMRI) and Single Nucleotide Polymorphism (SNP) data, can provide a more comprehensive picture of AD. However, existing multimodal fusion methods often overlook the intricate relationships among data types, resulting in suboptimal performance. To address these challenges, we propose a novel graph-based multimodal fusion framework for AD prediction. The framework constructs brain and gene ontology networks from sMRI and SNP data using domain-specific prior knowledge. It leverages Graph Convolutional Networks (GCNs) to extract deep features from each modality and employs a cross-attention mechanism to dynamically weigh feature importance across modalities. Additionally, a Correlation-Aware Learning (CAL) module explicitly models inter-modal correlations, enhancing the interpretability and robustness of the fusion. We validate the framework on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, where it significantly outperforms traditional methods in classification accuracy and feature representation. By integrating multimodal data and explicitly modeling inter-modal correlations, our method enables accurate AD diagnosis, improves the interpretability of multimodal integration, and offers new insights into the genetic and structural mechanisms underlying AD, making it a valuable tool for clinical diagnosis and research in neurodegenerative diseases.
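To make the architecture concrete, the sketch below illustrates one plausible reading of the pipeline summarized above: a small GCN branch per modality, a cross-attention step in which imaging node embeddings attend to genetic node embeddings, and a simple correlation term standing in for the CAL module. All names (`FusionSketch`, `GCNLayer`), dimensions, and the cosine-based correlation penalty are illustrative assumptions, not the paper’s specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), where A_hat is a
    pre-normalized adjacency matrix, e.g. D^-1/2 (A + I) D^-1/2."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return F.relu(a_hat @ self.lin(x))


class FusionSketch(nn.Module):
    """Hypothetical two-branch fusion: a GCN per modality (brain network
    from sMRI, gene network from SNPs), cross-attention between the two
    sets of node embeddings, then graph-level classification."""

    def __init__(self, mri_dim, snp_dim, hid=64, n_classes=2):
        super().__init__()
        self.gcn_mri = GCNLayer(mri_dim, hid)
        self.gcn_snp = GCNLayer(snp_dim, hid)
        self.cross_attn = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.cls = nn.Linear(2 * hid, n_classes)

    def forward(self, x_mri, a_mri, x_snp, a_snp):
        h_mri = self.gcn_mri(x_mri, a_mri)  # (n_roi, hid)
        h_snp = self.gcn_snp(x_snp, a_snp)  # (n_gene, hid)
        # Cross-attention: imaging nodes (queries) attend to gene nodes
        # (keys/values), so each brain-region embedding is reweighted by
        # its relevance to the genetic features.
        h_fused, _ = self.cross_attn(
            h_mri.unsqueeze(0), h_snp.unsqueeze(0), h_snp.unsqueeze(0)
        )
        # Graph-level readout by mean pooling each branch.
        z_mri = h_fused.squeeze(0).mean(dim=0)
        z_snp = h_snp.mean(dim=0)
        logits = self.cls(torch.cat([z_mri, z_snp]))
        # Stand-in "correlation-aware" term (an assumption, not the
        # paper's CAL): encourage the two modality embeddings to align.
        corr_loss = 1.0 - F.cosine_similarity(z_mri, z_snp, dim=0)
        return logits, corr_loss


if __name__ == "__main__":
    # Toy shapes: 90 brain ROIs with 4 features each, 200 gene nodes with
    # 1 feature each; identity matrices as placeholder adjacencies.
    x_mri, a_mri = torch.randn(90, 4), torch.eye(90)
    x_snp, a_snp = torch.randn(200, 1), torch.eye(200)
    model = FusionSketch(mri_dim=4, snp_dim=1)
    logits, corr_loss = model(x_mri, a_mri, x_snp, a_snp)
    print(logits.shape, corr_loss.item())
```

In this reading, cross-attention replaces naive feature concatenation, and the correlation penalty would be added to the classification loss so that training jointly optimizes predictive accuracy and inter-modal alignment.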