Abstract
INTRODUCTION: Identifying the thoracic vertebrae visible on chest radiographs is standard practice for assessing whether tube and catheter tips lie within their designated anatomical target regions in critically ill newborn infants. We introduce a fully automated deep learning system, based on the nnU-Net architecture, for segmenting and labeling the T1, T7, and T12 vertebrae in neonatal chest radiographs.

METHODS: We retrospectively collected 14,660 neonatal chest radiographs from 10 university hospitals in Korea, including infants both with and without tubes or catheters. All images were deidentified, and the T1, T7, and T12 vertebrae were annotated with rectangular bounding boxes validated by pediatricians. We split the dataset into training (11,860), validation (1,400), and test (1,400) sets, maintaining an even distribution of gestational age and birth weight across splits.

RESULTS: The automatic segmentation algorithm demonstrated excellent agreement with human-annotated segmentation for the T1, T7, and T12 vertebrae [Dice similarity coefficient (DSC): 0.8327, 95% CI: 0.8237-0.8418; 0.8322, 95% CI: 0.8213-0.8432; and 0.7998, 95% CI: 0.7864-0.8133, respectively]. For identifying the approximate location of each vertebra, a relatively modest DSC threshold of 0.50 or 0.60 already yielded an accuracy above 90% for T1, T7, and T12.

CONCLUSION: Our automated deep learning algorithm built on the nnU-Net framework accurately segments and labels the T1, T7, and T12 thoracic vertebrae in neonatal chest radiographs. This artificial intelligence-driven approach can map anatomical target regions based on thoracic vertebrae for assessing the positioning of tube and catheter tips in the neonatal intensive care unit.
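The Dice similarity coefficient and the threshold-based localization accuracy reported above can be sketched as follows. This is an illustrative outline only, not the study's evaluation code: the masks, array shapes, and 0.50 threshold example below are hypothetical, constructed to mirror the comparison between a predicted and an annotated vertebra bounding box.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / total

# Hypothetical example: two partially overlapping rectangular masks,
# standing in for a predicted vs. human-annotated vertebra bounding box.
pred = np.zeros((10, 10), dtype=bool)
gt = np.zeros((10, 10), dtype=bool)
pred[2:6, 2:6] = True   # 16 pixels
gt[3:7, 3:7] = True     # 16 pixels, 9 of them overlapping pred

dsc = dice_coefficient(pred, gt)  # 2 * 9 / (16 + 16) = 0.5625

# A prediction counts as a correct approximate localization when its
# DSC clears a chosen threshold, e.g. 0.50 as in the reported accuracy.
correct = dsc >= 0.50
```

Accuracy at a given threshold is then simply the fraction of test radiographs whose per-vertebra DSC meets or exceeds that threshold.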