Multimodal-Imaging-Based Interpretable Deep Learning Framework for Distinguishing Brucella from Tuberculosis Spondylitis: A Dual-Center Study



Abstract

Objectives: Brucella spondylitis (BS) and tuberculosis spondylitis (TS) are two spinal infections with overlapping clinical and imaging features, which complicates diagnosis. Early differentiation is critical, as their treatment regimens differ significantly. This study aims to develop a deep learning framework using multimodal computed tomography (CT) and magnetic resonance imaging (MRI) data to accurately distinguish between these two conditions, improving diagnostic accuracy and patient outcomes. Methods: Imaging data were acquired from two centers using different MRI and CT protocols. Sagittal T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), fat-suppression sequences (T2WI FSE), and sagittal CT data were collected. Image preprocessing included region-of-interest (ROI) segmentation, normalization, and augmentation. A deep learning model based on the pre-trained GoogleNet architecture was trained and evaluated against human radiologists using metrics including accuracy, sensitivity, and AUC. Results: The GoogleNet deep learning model outperformed other architectures in classifying TS and BS, achieving AUCs of 95.97%, 91.24%, and 81.25% on the training, test, and external validation datasets, respectively; the ResNet, DenseNet, and EfficientNet models showed lower AUC values. GoogleNet also demonstrated high accuracy (90.77% on training, 83.04% on test), with 90.91% sensitivity and 61.11% specificity in external validation. When compared with three radiologists, GoogleNet surpassed them in diagnostic accuracy and speed, achieving an AUC of 88.01% and processing cases in 0.001 min. These findings highlight the potential of AI to enhance diagnostic performance and efficiency. Lastly, the explanations provided by Grad-CAM precisely localized the major lesions. Conclusions: This multimodal-imaging-based deep learning model differentiated TS and BS well. Deep learning requires no manual feature extraction, feature selection, or handcrafted model development, and has great potential in daily clinical practice.
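The abstract reports accuracy, sensitivity, specificity, and AUC for a binary TS-vs-BS classifier. A minimal sketch of how these metrics are computed from model scores is shown below; this is illustrative only and not the authors' code, and the label convention (1 = TS, 0 = BS) and threshold of 0.5 are assumptions for the example.

```python
# Illustrative metric computations for a binary classifier, as named in
# the abstract. Assumed convention: label 1 = TS (positive), 0 = BS.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(y_true, y_pred):
    tp, _, _, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    _, tn, fp, _ = confusion_counts(y_true, y_pred)
    return tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the pairwise (Mann-Whitney) formulation: the fraction of
    positive/negative pairs in which the positive case scores higher,
    counting ties as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example with hypothetical scores; predictions thresholded at 0.5.
y_true = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
```

The pairwise AUC definition makes clear why AUC is threshold-independent, whereas accuracy, sensitivity, and specificity all depend on the chosen cutoff.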
