Open Access

A Disentangled Representation-Based Multimodal Fusion Framework Integrating Pathomics and Radiomics for KRAS Mutation Detection in Colorectal Cancer

School of Computer Science and Technology, Xidian University, Xi’an 710071, China
School of Biomedical Engineering, University of Science and Technology of China, Hefei 230026, China
Department of General Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
Department of Pathology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China

Abstract

Kirsten rat sarcoma viral oncogene homolog (KRAS) is a key biomarker for prognostic analysis and targeted therapy of colorectal cancer. Recently, advances in machine learning, especially deep learning, have greatly promoted KRAS mutation detection from tumor phenotype data, such as pathology slides and radiology images. However, two major problems remain in existing studies: inadequate single-modal feature learning and a lack of multimodal phenotypic feature fusion. In this paper, we propose a Disentangled Representation-based Multimodal Fusion framework integrating Pathomics and Radiomics (DRMF-PaRa) for KRAS mutation detection. Specifically, the DRMF-PaRa model consists of three parts: (1) the pathomics learning module, which introduces a tissue-guided Transformer model to extract more comprehensive and targeted pathological features; (2) the radiomics learning module, which captures both generic hand-crafted radiomics features and task-specific deep radiomics features; (3) the disentangled representation-based multimodal fusion module, which learns factorized subspaces for each modality and provides a holistic view of the two heterogeneous phenotypic feature sets. The proposed model is developed and evaluated on a multimodal dataset of 111 colorectal cancer patients with whole slide images and contrast-enhanced CT. The experimental results demonstrate the superiority of the proposed DRMF-PaRa model, with an accuracy of 0.876 and an AUC of 0.865 for KRAS mutation detection.
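The fusion module described above learns factorized subspaces: each modality is projected into a shared subspace (capturing information common to pathomics and radiomics) and a modality-specific subspace, and the factorized parts are combined into one holistic representation. The following is a minimal NumPy sketch of that idea; the feature dimensions, the random projection matrices, and the simple averaging of the shared views are illustrative assumptions, not the authors' trained model (which would learn these projections jointly, typically with similarity and orthogonality constraints).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (not specified in the abstract).
D_PATH, D_RAD = 256, 128   # pathomics / radiomics input feature sizes
D_SHARED, D_SPEC = 32, 32  # shared and modality-specific subspace sizes

# Stand-ins for learned linear projections into the factorized subspaces.
W_path_shared = rng.standard_normal((D_PATH, D_SHARED))
W_path_spec   = rng.standard_normal((D_PATH, D_SPEC))
W_rad_shared  = rng.standard_normal((D_RAD, D_SHARED))
W_rad_spec    = rng.standard_normal((D_RAD, D_SPEC))

def disentangled_fuse(path_feat, rad_feat):
    """Project each modality into a shared and a specific subspace,
    then concatenate the factorized parts into one fused vector."""
    shared_p = path_feat @ W_path_shared  # shared view from pathomics
    shared_r = rad_feat @ W_rad_shared    # shared view from radiomics
    spec_p   = path_feat @ W_path_spec    # pathomics-specific part
    spec_r   = rad_feat @ W_rad_spec      # radiomics-specific part
    # Average the two shared views; keep the specific parts separate.
    shared = (shared_p + shared_r) / 2.0
    return np.concatenate([shared, spec_p, spec_r])

path_feat = rng.standard_normal(D_PATH)  # e.g., a WSI embedding
rad_feat  = rng.standard_normal(D_RAD)   # e.g., a CT radiomics vector
fused = disentangled_fuse(path_feat, rad_feat)
print(fused.shape)  # (96,): one shared + two modality-specific subspaces
```

The fused vector would then feed a downstream classifier that outputs the KRAS mutation probability; the disentanglement is what lets the model weigh shared and modality-specific evidence separately.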

Big Data Mining and Analytics, pages 590-602

{{item.num}}

Cite this article:
Lv Z, Yan R, Lin Y, et al. A Disentangled Representation-Based Multimodal Fusion Framework Integrating Pathomics and Radiomics for KRAS Mutation Detection in Colorectal Cancer. Big Data Mining and Analytics, 2024, 7(3): 590-602. https://doi.org/10.26599/BDMA.2024.9020012

Metrics: 1475 Views · 123 Downloads · 9 Crossref · 7 Web of Science · 7 Scopus · 0 CSCD

Received: 08 January 2024
Revised: 27 January 2024
Accepted: 28 February 2024
Published: 16 April 2024
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).