Text-guided multimodal deep learning in magnetic resonance imaging for spinal structures segmentation and lumbar abnormalities identification


Abstract

BACKGROUND: In recent lumbar magnetic resonance imaging (MRI) analysis, the limitations of unimodal deep learning (DL) methods have become apparent: they cannot model the latent semantic relationships among the category terms that define the targets of medical imaging analysis tasks, because modeling those relationships requires textual representations. We therefore propose a text-guided multimodal DL method for segmenting 19 spinal structures and identifying 5 lumbar abnormalities on T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI) MRI scans.

METHODS: We employed ConvNeXt V2 as the image encoder (pretrained on 1,975 unlabeled lumbar MRI scans) and a Contrastive Language-Image Pretraining (CLIP)-based text encoder (pretrained on 515 clinical reports), with each modality's dataset independently supporting self-supervised pretraining of its encoder. Starting from a partially labeled dataset containing only lumbar abnormality annotations, we added segmentation annotations for vertebral and intervertebral disc structures, yielding 201 fully annotated T1WI and T2WI cases for model development. The resulting text-guided DL method integrates the text and image encoders to segment spinal structures and identify lumbar abnormalities.

RESULTS: The proposed method achieved a mean Intersection over Union (mIoU) of 0.823±0.053 for segmentation of the 19 spinal structures. No statistically significant difference was observed between upper-Dice (0.859±0.040) and lower-Dice (0.858±0.038) metrics across all regions of interest (ROIs) (P=0.744, Cohen's d=0.01). Our method outperformed nnU-Net (mIoU: 0.823 vs. 0.806, P<0.01), MT-U-Net (mIoU: 0.766±0.073, P<0.01), and the visual-only variant (Ours-VisualOnly, mIoU: 0.791, P<0.01). For lumbar abnormality identification, our method achieved the highest recall (0.867±0.027), the lowest false-positive rate (FPR, 0.079±0.015), and the highest precision (0.893±0.028), outperforming the comparison models (all P<0.01 for recall; P<0.05 for FPR and precision).

CONCLUSIONS: This study proposed a text-guided DL method for segmenting spinal structures and identifying lumbar abnormalities in multi-sequence MRI scans. By integrating cross-modal features, the method improves segmentation accuracy and identification performance, demonstrating clinical potential to assist diagnosis through automated workflows that reduce radiologist workload.
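To make the text-guidance idea concrete, below is a minimal NumPy sketch of one common way class-name text embeddings can condition pixel features: each image feature attends over the per-class text embeddings, and the attention weights double as per-pixel class scores. The function names, shapes, and fusion scheme are illustrative assumptions for exposition, not the authors' actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def text_guided_fusion(image_feats, text_embeds):
    """Cross-attention: pixels (HW, D) attend over class text embeddings (C, D)."""
    d = image_feats.shape[1]
    attn = softmax(image_feats @ text_embeds.T / np.sqrt(d), axis=-1)  # (HW, C)
    fused = image_feats + attn @ text_embeds  # residual text-conditioned features
    return fused, attn

rng = np.random.default_rng(0)
H, W, D, C = 8, 8, 32, 19  # C = 19 spinal structure classes, as in the paper
img = rng.standard_normal((H * W, D))   # stand-in for ConvNeXt V2 pixel features
txt = rng.standard_normal((C, D))       # stand-in for CLIP class-name embeddings
fused, attn = text_guided_fusion(img, txt)
seg = attn.argmax(axis=-1).reshape(H, W)  # per-pixel class map from attention scores
```

In a trained model the attention map would be supervised by the segmentation labels, so the text embeddings act as learnable class prototypes; here random features merely demonstrate the shapes and data flow.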
