Abstract
BACKGROUND: In the domain of recent lumbar magnetic resonance imaging (MRI) analysis, the limitations of unimodal deep learning (DL) methods for medical imaging have gradually been revealed in that they remain insufficient to model the latent relationships within the textual semantics of category terms. These terms are used to define the research targets in medical imaging analysis tasks. Effective modeling of these relationships requires textual representations. Therefore, we proposed a text-guided multimodal DL method for segmenting 19 spinal structures and identifying 5 lumbar abnormalities on T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI) MRI scans. METHODS: We employed ConvNeXt V2 as the image encoder (pretrained on 1,975 unlabeled lumbar MRI scans) and the Contrastive Language-Image Pretraining (CLIP)-based text encoder (pretrained on 515 clinical reports), with the datasets for each modality independently supporting self-supervised pretraining of its respective encoder. We completed segmentation annotations for vertebral and intervertebral disc structures in a partially labeled dataset originally containing only lumbar abnormalities, forming 201 cases of fully annotated T1WI and T2WI MRI samples for model development. We developed a text-guided DL method, which integrated the text encoder and the image encoder, achieving precise segmentation of spinal structures and identification of lumbar abnormalities. RESULTS: The proposed method achieved a mean Intersection over Union (mIoU) of 0.823±0.053 for segmentation of 19 spinal structures. No statistically significant differences were observed between upper-Dice (0.859±0.040) and lower-Dice (0.858±0.038) metrics across all regions of interest (ROIs) (P=0.744, Cohen's d =0.01). Our method outperformed nnU-Net (mIoU: 0.823 vs. 0.806, P<0.01), the MT-U-Net (mIoU: 0.766±0.073, P<0.01), and the visual-only variant (Ours-VisualOnly, mIoU: 0.791, P<0.01). 
For lumbar abnormality identification, our method achieved the highest recall (0.867±0.027), the lowest false-positive rate (FPR) (0.079±0.015), and the highest precision (0.893±0.028), outperforming all comparison models (P<0.01 for recall; P<0.05 for FPR and precision).

CONCLUSIONS: This study proposed a text-guided DL method for segmenting spinal structures and identifying lumbar abnormalities in multi-sequence MRI scans. By integrating cross-modal features, the method improved segmentation accuracy and identification performance, showing clinical potential to assist diagnosis through automated workflows that reduce radiologist workload.
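The abstract reports segmentation quality as IoU/Dice per ROI and identification quality as recall, precision, and FPR. As a minimal illustration (not the authors' implementation), these metrics can be computed from binary masks and labels as follows; the function names and inputs here are hypothetical:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union for one binary ROI mask.

    mIoU is the mean of this value over all ROIs (19 in the paper).
    """
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient for one binary ROI mask."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

def classification_metrics(pred_labels, true_labels):
    """Recall, precision, and false-positive rate for binary abnormality labels."""
    pred = np.asarray(pred_labels, dtype=bool)
    true = np.asarray(true_labels, dtype=bool)
    tp = np.sum(pred & true)     # abnormality correctly flagged
    fp = np.sum(pred & ~true)    # normal case wrongly flagged
    fn = np.sum(~pred & true)    # abnormality missed
    tn = np.sum(~pred & ~true)   # normal case correctly passed
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return recall, precision, fpr
```

Per-case scores such as those above would then be averaged across the test set to yield the mean±SD values reported in the RESULTS section.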