YOLOv12 Algorithm-Aided Detection and Classification of Lateral Malleolar Avulsion Fracture and Subfibular Ossicle Based on CT Images: Multicenter Study


Abstract

BACKGROUND: Lateral malleolar avulsion fractures (LMAFs) and subfibular ossicles (SFOs) are distinct entities that both present as small bone fragments near the lateral malleolus on imaging but require different treatment strategies. Clinical and radiological differentiation is challenging, which can impede timely and precise management. Magnetic resonance imaging (MRI) is the diagnostic gold standard for differentiating LMAFs from SFOs, whereas radiological differentiation using computed tomography (CT) alone is difficult in routine practice. Deep convolutional neural networks (DCNNs) have shown promise in musculoskeletal imaging diagnostics, but robust multicenter evidence in this specific context is lacking. OBJECTIVE: This study aims to evaluate several state-of-the-art DCNNs, including the latest You Only Look Once (YOLO) v12 algorithm, for detecting and classifying LMAFs and SFOs in CT images, using MRI-based diagnoses as the gold standard, and to compare model performance with that of radiologists reading CT alone. METHODS: In this retrospective study, 1918 patients (LMAF: n=1253, 65.3%; SFO: n=665, 34.7%) were enrolled from 2 hospitals in China between 2014 and 2024. MRI served as the gold standard and was independently interpreted by 2 senior musculoskeletal radiologists. Only CT images were used for model training, validation, and testing; CT images were manually annotated with bounding boxes. The cohort was randomly split into a training set (n=1092, 56.93%), an internal validation set (n=476, 24.82%), and an external test set (n=350, 18.25%). Four deep learning models (Faster R-CNN, single shot multibox detector [SSD], RetinaNet, and YOLOv12) were trained and evaluated using identical procedures. Model performance was assessed using mean average precision at an intersection over union threshold of 0.5 (mAP50), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.
The external test set was also independently interpreted by 2 musculoskeletal radiologists with 7 and 15 years of experience, and their results were compared with those of the best-performing model. Saliency maps were generated using Shapley values to enhance interpretability. RESULTS: Among the evaluated models, YOLOv12 achieved the highest detection and classification performance, with a mAP50 of 92.1% and an AUC of 0.983 on the external test set, significantly outperforming Faster R-CNN (mAP50 63.7%; AUC 0.79), SSD (mAP50 63.0%; AUC 0.63), and RetinaNet (mAP50 67.0%; AUC 0.73), all P<.001. When using CT alone, radiologists performed at a moderate level (accuracy: 75.6% and 69.1%; sensitivity: 75.0% and 65.2%; specificity: 76.0% and 71.1%), whereas YOLOv12 approached MRI-based reference performance (accuracy: 92.0%; sensitivity: 86.7%; specificity: 82.2%). Saliency maps corresponded well with expert-identified regions. CONCLUSIONS: While MRI (read by senior radiologists) is the gold standard for distinguishing LMAFs from SFOs, CT-based differentiation is challenging for radiologists. A CT-only DCNN (YOLOv12) achieved substantially higher performance than radiologists interpreting CT alone and approached the MRI-based reference standard, highlighting its potential to augment CT-based decision-making where MRI is limited or unavailable.
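To make the mAP50 criterion used above concrete: a predicted bounding box is typically counted as a true positive only when its intersection over union (IoU) with a ground-truth box is at least 0.5. The sketch below is illustrative only and is not taken from the study's code; the box format `(x1, y1, x2, y2)` and function names are assumptions.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """True positive under the mAP50 matching rule: IoU >= 0.5."""
    return iou(pred_box, gt_box) >= threshold
```

Average precision is then computed per class from the precision-recall curve of detections ranked by confidence, and mAP50 averages that value over the classes (here, LMAF and SFO).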
