Abstract
We present a deep learning-based approach for accurate bone segmentation directly from routine T1-weighted MRI scans, with the goal of enabling MRI-only virtual surgical planning in head and neck oncology. Current workflows rely on CT for bone modeling and MRI for tumor delineation, introducing challenges related to image registration, radiation exposure, and resource use. To address this, we trained a deep neural network using CT-based segmentations of the mandible, cranium, and inferior alveolar nerve as ground truth. A dataset of 100 patients with paired CT and MRI scans was collected. MRI scans were resampled to the CT voxel size, and the corresponding CT segmentations were rigidly aligned to the MRI. The model was trained on 80 cases and evaluated on 20 cases using the Dice similarity coefficient, Intersection over Union (IoU), precision, and recall. The network achieved a mean Dice of 0.86 (SD 0.03), IoU of 0.76 (SD 0.05), and both precision and recall of 0.86 (SD 0.05). Surface deviation analysis between CT- and MRI-derived bone models showed a median deviation of 0.21 mm (IQR 0.05) for the mandible and 0.30 mm (IQR 0.05) for the cranium. These results demonstrate that accurate CT-like bone models can be derived from standard MRI, supporting the feasibility of MRI-only surgical planning.
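The evaluation metrics named above (Dice, IoU, precision, recall) have standard voxel-wise definitions for binary masks. The sketch below is not the paper's evaluation code; the function name and signature are illustrative, but the formulas are the standard overlap definitions (note that Dice = 2·IoU/(1+IoU), consistent with the reported 0.86 Dice and 0.76 IoU):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-wise overlap metrics between two binary masks.
    Hypothetical helper; not taken from the paper's codebase."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)        # Dice similarity coefficient
    iou = tp / (tp + fp + fn)                 # Intersection over Union
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, iou, precision, recall

# Toy 2-D example with partial overlap
a = np.array([[1, 1, 0], [0, 1, 0]])  # predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # ground-truth mask
dice, iou, precision, recall = segmentation_metrics(a, b)
```

In practice these metrics would be computed per case on the 20 held-out test volumes and then averaged, which matches the reported means and standard deviations.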