Abstract
OBJECTIVE: To develop, validate, and compare four three-dimensional (3D) convolutional neural network (CNN) models for differentiating ground-glass nodules (GGNs) on non-contrast chest computed tomography (CT), classifying them as atypical adenomatous hyperplasia (AAH)/adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IA).

MATERIALS AND METHODS: This multi-center retrospective study enrolled 4284 consecutive patients with surgically resected, pathologically confirmed AAH/AIS, MIA, or IA from four hospitals between January 2015 and December 2023. GGNs were randomly partitioned into a training set (n = 3083, 72%) and a validation set (n = 1277, 28%). Four 3D deep learning models (Res2Net 3D, DenseNet3D, ResNet50 3D, Vision Transformer 3D) were implemented for GGN segmentation and three-class classification. In addition, variants of the Res2Net 3D model incorporating clinical and CT features were developed: Res2Net 3D_w2 (sex and age), Res2Net 3D_w6 (additionally lesion size, location, and smoking history), and Res2Net 3D_w10 (sex, age, location, the mean, maximum, and standard deviation of CT attenuation, nodule volume, volume ratio, volume ratio within the left/right lung, and the maximum CT value of the entire lung). Model performance was evaluated using accuracy, recall, precision, F1-score, and area under the receiver operating characteristic curve (AUC).

RESULTS: Res2Net 3D outperformed the other models, achieving AUCs of 0.91 (AAH/AIS), 0.88 (MIA), and 0.92 (IA), with F1-scores of 0.416, 0.500, and 0.929, respectively. All Res2Net 3D variants achieved accuracies between 0.83 and 0.84.

CONCLUSION: The Res2Net 3D model accurately differentiates GGN subtypes on non-contrast CT, with the highest performance for invasive adenocarcinoma.
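The per-class precision, recall, and F1-scores reported above follow the standard one-vs-rest definitions for a three-class problem. A minimal sketch of that computation, using a hypothetical confusion matrix (the counts below are illustrative only and are not taken from the study):

```python
# Per-class precision, recall, and F1 for a 3-class problem
# (AAH/AIS, MIA, IA), computed one-vs-rest from a confusion matrix.
# The confusion-matrix counts below are hypothetical, for illustration only.

def per_class_metrics(cm):
    """cm[i][j] = number of samples with true class i predicted as class j."""
    n = len(cm)
    metrics = []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp   # predicted k, true != k
        fn = sum(cm[k][j] for j in range(n)) - tp   # true k, predicted != k
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics.append({"precision": precision, "recall": recall, "f1": f1})
    return metrics

# Hypothetical confusion matrix (rows: true AAH/AIS, MIA, IA)
cm = [[30, 15, 5],
      [10, 40, 20],
      [5, 15, 200]]
for name, m in zip(["AAH/AIS", "MIA", "IA"], per_class_metrics(cm)):
    print(f"{name}: precision={m['precision']:.3f} "
          f"recall={m['recall']:.3f} f1={m['f1']:.3f}")
```

With class imbalance of this shape (many more IA cases than AAH/AIS or MIA), the F1-score for the majority class tends to be much higher than for the minority classes, which is consistent with the pattern of F1-scores reported in the results.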