Abstract
At present, aging societies such as Japan's face a growing risk of inadequate medical resources. Using neural networks to help doctors locate a patient's aorta in computed tomography (CT) images before surgery is therefore a task of practical value. While UNet and several of its derivatives perform well on the semantic segmentation of optimally contrast-enhanced CT images, their segmentation accuracy on poorly contrasted or non-contrasted CT images is too low to yield usable results. To address this problem, we propose a data-processing module, the Automatic Spatial Contrast (ASC) Module, based on the physical-spatial structure and anatomical properties of the aorta. In experiments using UNet, Attention UNet, TransUNet, and Swin-UNet as baselines, versions of these models augmented with the proposed ASC Module improved the Intersection-over-Union (IoU) by up to 24.84% and the Dice Similarity Coefficient (DSC) by up to 28.13%. Furthermore, the proposed approach incurs only a small increase in GPU memory usage compared with the baseline models.