Abstract
OBJECTIVES: This study aims to develop a deep learning-based model for the automatic detection of fenestration and dehiscence in Cone Beam Computed Tomography (CBCT) images, providing a quantitative tool for diagnosing alveolar bone defects.

METHODS: A Swin UNEt TRansformers (Swin UNETR) model was trained on 10,752 manually annotated sagittal CBCT dental images to automatically measure and diagnose fenestration and dehiscence. Model performance was evaluated on keypoint localization accuracy, length measurement accuracy, and disease detection performance. Heatmaps were employed for visual identification of disease locations.

RESULTS: The Swin UNETR model achieved keypoint recognition rates of 92.97%-99.09% for fenestration and dehiscence. Predicted lengths for all defect sites showed strong correlation with actual measurements. Disease diagnosis accuracy ranged from 0.8228 to 0.9476. The model demonstrated robust performance in keypoint identification, defect length quantification, and disease diagnosis.

CONCLUSION: The deep learning model enables precise localization and quantitative measurement of fenestration and dehiscence in CBCT images. This approach enhances diagnostic efficiency and accuracy in detecting fenestration and dehiscence, facilitating preoperative orthodontic risk assessment and personalized treatment planning.
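The abstract does not specify the decoding pipeline, but heatmap-based keypoint localization followed by length measurement is commonly implemented as below. This is a minimal NumPy sketch under stated assumptions: the image size, Gaussian sigma, keypoint positions, and the `mm_per_pixel` scale are all illustrative values, not figures from the study.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=3.0):
    """Render a 2D Gaussian heatmap peaked at `center` (row, col)."""
    rows, cols = np.indices(shape)
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decode_keypoint(heatmap):
    """Recover a keypoint as the (row, col) index of the heatmap's peak."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def defect_length(p1, p2, mm_per_pixel=0.3):
    """Euclidean distance between two keypoints, scaled to millimetres.

    The 0.3 mm/pixel scale is an illustrative assumption, not the
    study's actual CBCT voxel size.
    """
    return float(np.hypot(p1[0] - p2[0], p1[1] - p2[1])) * mm_per_pixel

# Two hypothetical keypoints bounding a dehiscence on a 128x128 slice.
h_top = gaussian_heatmap((128, 128), (40, 60))
h_bottom = gaussian_heatmap((128, 128), (70, 60))
top = decode_keypoint(h_top)
bottom = decode_keypoint(h_bottom)
print(defect_length(top, bottom))  # 30 px at 0.3 mm/px -> 9.0 mm
```

In practice, the network (here, Swin UNETR) would predict one heatmap channel per anatomical keypoint, and the decoded peak coordinates would feed both the length measurement and the defect diagnosis.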