Open Access
Multimodal Representation Learning Based on Personalized Graph-Based Fusion for Mortality Prediction Using Electronic Medical Records
Big Data Mining and Analytics 2025, 8(4): 933-950
Published: 12 May 2025

Predicting mortality risk in the Intensive Care Unit (ICU) from Electronic Medical Records (EMR) is crucial for identifying patients who need immediate attention. However, the incompleteness and variability of EMR features across patients make mortality prediction challenging. This study proposes a multimodal representation learning framework built on a novel personalized graph-based fusion approach to address these challenges. The approach constructs patient-specific modality aggregation graphs that describe the features available for each patient in the incomplete multimodal data, enabling effective and explainable fusion of those features. Modality-specific encoders encode each modality separately, and the personalized graph-based fusion method then fuses the patient-specific multimodal representations according to the constructed aggregation graphs, accommodating the variability and incompleteness of the inputs. Furthermore, a MultiModal Gated Contrastive Representation Learning (MMGCRL) method is proposed to capture complementary information across the multimodal representations and improve model performance. We evaluate the framework on the large-scale ICU dataset MIMIC-III. Experimental results demonstrate its effectiveness in mortality prediction, outperforming several state-of-the-art methods.
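The fusion idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `personalized_graph_fusion`, the fully connected graph over observed modalities, the single message-passing step, and the random-projection gate (with hypothetical `dim` and `seed` parameters) are all stand-ins for the paper's trained modality aggregation graphs and gated fusion.

```python
import numpy as np

def personalized_graph_fusion(modality_embeddings, dim=8, seed=0):
    """Fuse a patient's available modality embeddings via a
    patient-specific aggregation graph (hypothetical sketch).

    modality_embeddings: dict mapping modality name -> 1-D np.ndarray,
    containing only the modalities actually observed for this patient,
    so patients with different missingness patterns get different graphs.
    """
    rng = np.random.default_rng(seed)
    names = sorted(modality_embeddings)
    X = np.stack([modality_embeddings[m] for m in names])  # (k, dim)
    k = X.shape[0]
    # Patient-specific graph: fully connect the observed modalities
    # (self-loops included) so each node aggregates from the others.
    A = np.ones((k, k))
    A /= A.sum(axis=1, keepdims=True)        # row-normalize adjacency
    H = np.tanh(A @ X)                       # one message-passing step
    # Gated readout: a trained gate would score each modality; here a
    # fixed random projection stands in for learned gate weights.
    gates = np.exp(H @ rng.normal(size=dim))
    gates /= gates.sum()
    return gates @ H                         # fused patient representation
```

Because the graph is rebuilt per patient from whichever modalities are present, the same function handles a patient with three modalities and a patient with one, always returning a fused vector of the same dimension.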

Open Access
Segmentation-Guided Deep Learning for Glioma Survival Risk Prediction with Multimodal MRI
Big Data Mining and Analytics 2025, 8(2): 364-382
Published: 28 January 2025

Glioma survival risk prediction is of great significance for individualized treatment and assessment. Most current deep-learning-based survival prediction paradigms rely on invasive and expensive histopathology and genomics data, whereas magnetic resonance imaging (MRI) has emerged as a promising non-invasive alternative with significant prognostic potential. To leverage MRI, we propose a segmentation-guided, fully automated multimodal MRI-based survival network (SGS-Net) that performs glioma segmentation and survival risk prediction simultaneously. Specifically, the task interrelation is modeled with a hybrid convolutional neural network-Transformer (CNN-Transformer) encoder that learns shared high-level semantic features by co-training a decoder for glioma segmentation and a Cox model for survival prediction. To ensure these high-level features are well represented, glioma segmentation serves as an auxiliary task that guides survival prediction through joint optimization of the segmentation loss and the Cox partial log-likelihood loss. A pair-wise ranking loss is further designed so that the network learns the survival differences between patients. To balance the multi-task losses, an uncertainty-based weighting scheme adaptively adjusts the task weights to prevent task bias. Finally, SGS-Net is assessed on a publicly available multi-institutional dataset. Experimental and visual results show that SGS-Net achieves promising segmentation performance and a C-index of 81.07% for survival risk prediction, outperforming several existing state-of-the-art methods and even histopathology-based approaches. In addition, Kaplan-Meier survival analysis confirms that the prognostic risk produced by SGS-Net is consistent with prior prognoses based on grading or genotyping paradigms.
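The two survival objectives named in the abstract, the Cox partial log-likelihood and a pair-wise ranking loss, can be sketched in their standard forms as follows. This is a generic NumPy illustration, not the paper's code; the function names and the logistic form of the ranking penalty are assumptions, and the Cox term uses the simple Breslow formulation without tie handling.

```python
import numpy as np

def cox_partial_log_likelihood_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow, no tie handling).

    risk:  predicted log-risk score per patient (higher = worse prognosis)
    time:  observed survival or censoring time per patient
    event: 1 if the event (death) was observed, 0 if censored
    """
    order = np.argsort(-time)                  # descending time: each
    r = risk[order]                            # prefix is a risk set
    e = event[order]
    log_cumsum = np.log(np.cumsum(np.exp(r)))  # log sum over risk set
    return -np.sum((r - log_cumsum) * e) / max(e.sum(), 1)

def pairwise_ranking_loss(risk, time, event):
    """Logistic ranking loss over comparable pairs: a patient with an
    observed earlier event should receive a higher predicted risk than
    any patient who survived longer."""
    loss, n = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue                           # only events anchor pairs
        for j in range(len(time)):
            if time[j] > time[i]:
                loss += np.log1p(np.exp(risk[j] - risk[i]))
                n += 1
    return loss / max(n, 1)
```

A joint objective in the spirit of the abstract would add these to the segmentation loss with per-task weights; under the uncertainty-based weighting scheme mentioned above, each weight is tied to a learned task noise parameter rather than fixed by hand.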
