Research Article | Open Access | Just Accepted

Graph-Based Multimodal Fusion Framework with Correlation-Aware Learning for Alzheimer’s Disease Prediction

Chang Li, Shuang Feng, Letian Wang, Jinrui Hou, Xiaohua Wan, Fa Zhang, and Bin Hu

Key Laboratory of Brain Health Intelligent Evaluation and Intervention affiliated to Ministry of Education and School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China


Abstract

Accurate diagnosis of Alzheimer’s Disease (AD) is essential for early intervention. Traditional methods relying on single-modality data often fail to capture the complexity of the disease, limiting diagnostic accuracy. Integrating multimodal data, such as structural Magnetic Resonance Imaging (sMRI) and Single Nucleotide Polymorphism (SNP) data, can provide a more comprehensive understanding of AD. However, existing multimodal fusion methods often overlook the intricate relationships among different data types, resulting in suboptimal performance. To address these challenges, we propose a novel graph-based multimodal fusion framework for AD prediction. The framework constructs brain and gene ontology networks using domain-specific prior knowledge from sMRI and SNP data. It leverages Graph Convolutional Networks (GCN) to extract deep features from each modality and employs a cross-attention mechanism to dynamically weigh feature importance across modalities. Additionally, a Correlation-Aware Learning (CAL) module explicitly models inter-modal correlations, enhancing the interpretability and robustness of the fusion. We validate the effectiveness of our framework using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Results show that our framework significantly outperforms traditional methods in classification accuracy and feature representation. Our method enables accurate AD diagnosis by integrating multimodal data and explicitly modeling inter-modal correlations. It enhances the interpretability of multimodal integration and provides new insights into the genetic and structural mechanisms underlying AD, serving as a valuable tool for clinical diagnosis and research in neurodegenerative diseases.
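The pipeline the abstract describes — per-modality GCN feature extraction over a brain network and a gene network, cross-attention fusion across modalities, and a correlation-aware term — can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the graph sizes, feature dimensions, single shared weight matrix, and the cosine-similarity correlation penalty are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A+I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)              # ReLU activation

def cross_attention(query_feats, context_feats):
    """Scaled dot-product attention: one modality's nodes attend to the other's."""
    d = query_feats.shape[1]
    scores = query_feats @ context_feats.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)         # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)             # rows sum to 1
    return attn @ context_feats, attn

def correlation_loss(z1, z2):
    """A simple inter-modal correlation penalty: 1 - cosine similarity."""
    c = np.sum(z1 * z2) / (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-8)
    return 1.0 - c

# Toy example: a 90-node brain network (sMRI ROIs) and a 50-node gene network (SNPs).
A_brain = (rng.random((90, 90)) > 0.9).astype(float)
A_brain = np.maximum(A_brain, A_brain.T)                # symmetrize adjacency
A_gene = (rng.random((50, 50)) > 0.9).astype(float)
A_gene = np.maximum(A_gene, A_gene.T)
X_brain = rng.standard_normal((90, 16))                 # node features per modality
X_gene = rng.standard_normal((50, 16))
W = rng.standard_normal((16, 16)) * 0.1                 # shared layer weights (toy)

H_brain = gcn_layer(A_brain, X_brain, W)                # per-modality deep features
H_gene = gcn_layer(A_gene, X_gene, W)
fused, attn = cross_attention(H_brain, H_gene)          # brain nodes query gene nodes
loss = correlation_loss(H_brain.mean(axis=0), fused.mean(axis=0))
```

In a trained model the attention weights would dynamically re-weight gene features for each brain region, and the correlation term would be minimized jointly with the classification loss so the two modality embeddings stay aligned.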

Big Data Mining and Analytics

Cite this article:
Li C, Feng S, Wang L, et al. Graph-Based Multimodal Fusion Framework with Correlation-Aware Learning for Alzheimer’s Disease Prediction. Big Data Mining and Analytics, 2025, https://doi.org/10.26599/BDMA.2025.9020054

Metrics: 616 views · 63 downloads · Citations: Crossref 0, Web of Science 0, Scopus 0, CSCD 0

Received: 31 January 2025
Revised: 14 April 2025
Accepted: 06 May 2025
Available online: 03 September 2025

© The author(s) 2025

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).