Automatic segmentation and PI-RADS grading of prostate cancer for biparametric MRI


Abstract

The annual mortality rate from prostate cancer (PCa), a common malignant neoplasm in middle-aged and elderly men, is rising. Biparametric magnetic resonance imaging (bpMRI) is indispensable to PCa imaging analysis because it captures distinct, complementary disease-related information from two modalities with synergistic performance. Most state-of-the-art PCa diagnostic techniques focus on a single modality or a single task, neglecting information sharing across the two modalities and the task correlations that multi-task learning can exploit. We propose a dual-modality image fusion and multi-task learning model that performs automatic PI-RADS grading and prostate and PCa region segmentation simultaneously. First, a shared fusion block and independent encoder blocks were developed to extract complementary information about the prostate and PCa from T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) features. Second, in the encoder stage, a dual visual attention module was designed to extract features from multiple receptive fields and deliver more accurate contextual information, and a novel decoder was designed to integrate encoder features effectively, yielding more refined global and local detail. Third, a high-level feature fusion technique was developed to capture more precise detail for the classification task. Finally, a multi-task mixed loss function is proposed to address class imbalance. Segmentation results for the prostate and PCa on multiple diverse male pelvic MRI datasets demonstrate the superior performance of the proposed method. Both the basic performance evaluation and the comparative model evaluation validate its effectiveness in prostate and PCa segmentation as well as automatic PI-RADS grading.
External validation on the independent PROMISE12 dataset further confirms the strong generalizability of our model across different institutions, scanning devices and patient cohorts.
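The abstract does not give the exact form of the multi-task mixed loss, but a common instantiation for this kind of model combines a soft Dice term for the segmentation head with a class-weighted cross-entropy term for the PI-RADS grading head, with the class weights countering grade imbalance. The sketch below illustrates that structure; the function names, the `alpha`/`beta` balancing weights, and the specific loss terms are assumptions, not the authors' published formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation mask.

    pred:   predicted foreground probabilities, shape (H, W)
    target: binary ground-truth mask, shape (H, W)
    """
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def weighted_ce_loss(logits, label, class_weights):
    """Class-weighted cross-entropy for PI-RADS grade classification.

    logits:        raw per-grade scores, shape (num_classes,)
    label:         integer ground-truth grade index
    class_weights: per-class weights to counter grade imbalance
    """
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax
    return -class_weights[label] * log_probs[label]

def multitask_mixed_loss(seg_pred, seg_target, cls_logits, cls_label,
                         class_weights, alpha=1.0, beta=1.0):
    """Combine the segmentation and grading losses into one objective.

    alpha and beta (assumed here) trade off the two tasks.
    """
    return (alpha * dice_loss(seg_pred, seg_target)
            + beta * weighted_ce_loss(cls_logits, cls_label, class_weights))
```

In a weighted scheme like this, rare high-grade lesions receive larger `class_weights` entries, so misclassifying them costs more and the classifier is not dominated by the majority low-grade classes.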
