SegmentAnyTooth: An open-source deep learning framework for tooth enumeration and segmentation in intraoral photos


Abstract

BACKGROUND/PURPOSE: Preventive dentistry is essential for maintaining public oral health, but inequalities in dental care, especially in underserved areas, remain a significant challenge. Image-based dental analysis, using intraoral photographs, offers a practical and scalable approach to bridge this gap. In this context, we developed SegmentAnyTooth, an open-source deep learning framework that solves the critical first step by enabling automated tooth enumeration and segmentation across five standard intraoral views: upper occlusal, lower occlusal, frontal, right lateral, and left lateral. This tool lays the groundwork for advanced applications, reducing reliance on limited professional resources and enhancing access to preventive dental care.

MATERIALS AND METHODS: A dataset of 5000 intraoral photos from 1000 sets (953 subjects) was annotated with tooth surfaces and FDI notations. You Only Look Once 11 (YOLO11) nano models were trained for tooth localization and enumeration, followed by Light Segment Anything in High Quality (Light HQ-SAM) for segmentation using an active learning approach.

RESULTS: SegmentAnyTooth demonstrated high segmentation accuracy, with mean Dice similarity coefficients (DSC) of 0.983 ± 0.036 for upper occlusal, 0.973 ± 0.060 for lower occlusal, and 0.920 ± 0.063 for frontal views. Lateral view models also performed well, with mean DSCs of 0.939 ± 0.070 (right) and 0.945 ± 0.056 (left). Statistically significant improvements over baseline models such as U-Net, nnU-Net, and Mask R-CNN were observed (Wilcoxon signed-rank test, P < 0.01).

CONCLUSION: SegmentAnyTooth provides accurate, multi-view tooth segmentation to enhance dental care, early diagnosis, individualized care, and population-level research. Its open-source design supports integration into clinical and public health workflows, with ongoing improvements focused on generalizability.
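The Dice similarity coefficient (DSC) reported in the results measures overlap between a predicted tooth mask and its ground-truth annotation. A minimal sketch of the metric is shown below; the function name and example masks are illustrative and not taken from the paper's released code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |pred ∩ truth| / (|pred| + |truth|), ranging from 0 (no
    overlap) to 1 (perfect agreement).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * intersection / total
```

In a per-tooth evaluation such as the one reported here, this score would be computed for each enumerated tooth mask and then averaged within each view to produce the mean ± standard deviation values quoted above.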
