Robust prostate cancer risk stratification from unregistered mpMRI via learned cross-modal correspondence


Abstract

BACKGROUND AND OBJECTIVE: Accurate prostate cancer risk stratification benefits from the fusion of T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps from multiparametric MRI (mpMRI). However, patient motion and imaging distortions frequently cause spatial misalignments between these sequences. While radiologists compensate for this via subjective cognitive fusion, the process introduces inter-reader variability and can be particularly challenging in equivocal cases. Conventional fusion models are even more vulnerable, as they require perfect image registration, making them brittle in real-world clinical scenarios. We aimed to develop and validate a deep learning framework that overcomes these limitations by robustly fusing unregistered mpMRI data. METHODS: We retrospectively analyzed a cohort of 300 consecutive men (mean age, 71.5 ± 7.6 years) who underwent pre-biopsy prostate mpMRI at our institution between January 2021 and May 2023. All included patients had pathologically confirmed prostate cancer, with high-risk prostate cancer, as defined by NCCN guidelines, present in 184 of 300 cases (61.3%). The dataset was partitioned chronologically into a development cohort (n=250) for 5-fold cross-validation and a temporal test cohort (n=50) for independent evaluation. We developed Cross-Modal Optimal Transport Fusion (CMOT-Fusion), a deep learning framework that learns to identify and match diagnostically relevant regions between misaligned T2WI and ADC images. This approach enables robust multimodal fusion without requiring an explicit image registration step. RESULTS: For discriminating NCCN high-risk versus low/intermediate-risk disease among pathologically confirmed prostate cancer cases, CMOT-Fusion achieved a mean Area Under the Curve (AUC) of 0.849 ± 0.034 in 5-fold cross-validation, outperforming single-modality baselines and conventional fusion methods.
On an independent test set, the model's performance remained robust, with an ensemble AUC of 0.824 (95% CI: 0.694-0.930; ensemble probability computed as the mean of the five fold-specific model probabilities per patient). As a cohort-specific clinical reference based on routine radiology suspicion scoring, PI-RADS v2.1 achieved an AUC of 0.839 (95% CI: 0.726-0.930) on the same test cohort. CONCLUSION: Our results demonstrate that learning a direct correspondence between unregistered mpMRI sequences significantly improves prostate cancer risk stratification. The proposed CMOT-Fusion framework offers a robust solution to the common clinical problem of inter-sequence misalignment, potentially enhancing diagnostic reliability and streamlining clinical workflows by removing the need for a separate image registration step. Given the single-center retrospective design and the small independent test cohort, these findings should be considered exploratory and warrant multi-center prospective validation.
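The abstract does not spell out the internals of CMOT-Fusion, but its name indicates that cross-modal correspondence is established via optimal transport rather than explicit registration. As an illustration only, the core matching step can be sketched with entropy-regularized optimal transport (the Sinkhorn-Knopp algorithm) between two sets of region features; the feature shapes, cost function, and regularization value below are assumptions, not details from the paper.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=500):
    """Entropy-regularized optimal transport (Sinkhorn-Knopp).

    cost: (n, m) pairwise cost between n T2WI regions and m ADC regions.
    Returns a soft correspondence (transport plan) of shape (n, m) whose
    rows/columns sum to uniform marginals over the regions.
    """
    n, m = cost.shape
    a = np.ones(n) / n                # uniform mass per T2WI region
    b = np.ones(m) / m                # uniform mass per ADC region
    K = np.exp(-cost / reg)           # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):          # alternate marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

# Toy example: 4 T2WI regions vs 5 ADC regions with random 8-dim features.
rng = np.random.default_rng(0)
f_t2 = rng.normal(size=(4, 8))
f_adc = rng.normal(size=(5, 8))
cost = np.linalg.norm(f_t2[:, None, :] - f_adc[None, :, :], axis=-1)
cost /= cost.max()                    # normalize so reg=0.1 is well-scaled
plan = sinkhorn(cost)
print(plan.shape)                     # soft matching, no registration step
```

Each entry of the plan weights how strongly a T2WI region corresponds to an ADC region, so misaligned features can be fused by transporting one set onto the other instead of warping the images into the same coordinate frame.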
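The reported test-set AUC of 0.824 is an ensemble result: per the abstract, the ensemble probability is the mean of the five fold-specific model probabilities for each patient. A minimal sketch of that aggregation, with illustrative probabilities and labels (not data from the study):

```python
import numpy as np

# Hypothetical per-patient probabilities from the five fold-specific models
# (rows: folds, columns: patients); values are illustrative only.
fold_probs = np.array([
    [0.82, 0.31, 0.65],
    [0.78, 0.40, 0.70],
    [0.85, 0.28, 0.61],
    [0.80, 0.35, 0.68],
    [0.79, 0.33, 0.64],
])
ensemble = fold_probs.mean(axis=0)    # one probability per patient
print(np.round(ensemble, 3))          # → [0.808 0.334 0.656]

# AUC as the fraction of (high-risk, low/intermediate-risk) pairs the
# ensemble ranks correctly (Mann-Whitney formulation; ties ignored here).
labels = np.array([1, 0, 1])          # 1 = NCCN high-risk (illustrative)
pos, neg = ensemble[labels == 1], ensemble[labels == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(auc)
```

Averaging probabilities across folds, rather than picking a single fold's model, is a common way to reduce variance from any one cross-validation split.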
