Deep Learning-Based Localization and Orientation Estimation of Pedicle Screws in Spinal Fusion Surgery


Abstract

OBJECTIVE: This study investigated the application of a deep learning-based object detection model for accurate localization and orientation estimation of spinal fixation surgical instruments during surgery.

METHODS: We employed the You Only Look Once (YOLO) object detection framework with oriented bounding boxes (OBBs) to address the challenge of non-axis-aligned instruments in surgical scenes. The initial dataset of 100 images was assembled from brochure and website images from 11 manufacturers of commercially available pedicle screws used in spinal fusion surgeries, and data augmentation was used to expand the dataset to 300 images. The model was trained, validated, and tested on 70%, 20%, and 10% splits of the lumbar pedicle screw images, respectively, with training running for 100 epochs.

RESULTS: Testing showed that the model could detect both the locations of the pedicle screws in the surgical scene and their direction angles through the OBBs. The model achieved an F1 score of 0.86 (precision: 1.00, recall: 0.80), evaluated across confidence levels, along with mAP50. The high precision suggests that the model reliably identifies true positive instrument detections, although the recall indicates a slight limitation in capturing every instrument present. This approach offers advantages over traditional axis-aligned bounding-box detection for tasks where object orientation is crucial, and our findings suggest the potential of YOLOv8 OBB models in real-world surgical applications such as instrument tracking and surgical navigation.

CONCLUSION: Future work will explore incorporating additional data and hyperparameter optimization to improve overall model performance.
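The oriented bounding boxes described above encode each instrument's position together with its direction angle. A minimal sketch of how a (center, width, height, angle) box can be converted to its four corner points is shown below; the parametrization mirrors the common YOLO-style OBB output, but the exact angle convention varies by framework, so this is an illustrative assumption rather than the authors' implementation.

```python
import math

def obb_corners(cx, cy, w, h, angle_deg):
    """Return the four corner points of an oriented bounding box.

    (cx, cy) is the box center, (w, h) its width and height, and
    angle_deg the rotation of the box in degrees. The parametrization
    is assumed here for illustration; conventions differ by framework.
    """
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Half-extents along the box's local axes.
    hw, hh = w / 2.0, h / 2.0
    corners = []
    for dx, dy in ((-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh)):
        # Rotate the local corner offset by the box angle, then translate.
        x = cx + dx * cos_t - dy * sin_t
        y = cy + dx * sin_t + dy * cos_t
        corners.append((x, y))
    return corners
```

With an axis-aligned box (angle 0) the corners reduce to the familiar rectangle; a nonzero angle rotates them about the center, which is what lets an OBB fit an instrument lying diagonally in the surgical scene far more tightly than an axis-aligned box.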
