Abstract
Introduction
This study evaluates the effectiveness of a lightweight vision transformer (EfficientFormerV2-S2) with a dual-output architecture for lung nodule classification, assessing its performance and generalizability across multiple datasets.

Methods
The study utilized datasets from three sources: Institution 1 (936 images), Institution 2 (280 images), and a public Zenodo dataset (308 images), comprising adenocarcinoma, squamous cell carcinoma, and benign lesions. Model evaluation included holdout validation, five-fold cross-validation, and benchmarking against the PneumoniaMedMNIST dataset. Comprehensive image preprocessing and augmentation techniques were implemented.

Results
The model demonstrated robust performance across all datasets, achieving test accuracies of 92.62 ± 1.65%, 97.14 ± 1.78%, and 95.74 ± 1.35% for Institution 1, Institution 2, and the Zenodo dataset, respectively. Cross-validation results showed consistent performance with minimal variability (standard deviations <2%). On the PneumoniaMedMNIST benchmark, our optimized model achieved superior performance (accuracy: 0.936, AUC: 0.981) compared with ResNet18 and ResNet50 benchmarks.

Conclusion
The lightweight transformer-based model demonstrates excellent performance and generalizability across multiple institutional datasets, suggesting its potential for efficient clinical implementation in lung nodule classification tasks.
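The five-fold cross-validation protocol reported above (mean ± standard deviation accuracy per dataset) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the `evaluate_fold` stub stands in for training and testing EfficientFormerV2-S2 on one fold, and the fold-splitting logic, seed, and dataset size are assumptions for demonstration only.

```python
# Sketch of five-fold cross-validation with mean +/- std reporting,
# as described in the Methods section. `evaluate_fold` is a hypothetical
# placeholder for per-fold model training and evaluation.
import random
import statistics

def kfold_indices(n_samples, k=5, seed=42):
    """Shuffle sample indices and yield (train, test) index lists per fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_size = n_samples // k
    for f in range(k):
        test = idx[f * fold_size:(f + 1) * fold_size]
        test_set = set(test)
        train = [i for i in idx if i not in test_set]
        yield train, test

def evaluate_fold(train_idx, test_idx):
    """Placeholder: would train the model on train_idx and return test accuracy."""
    return 0.92  # dummy value for illustration only

# 936 matches the Institution 1 dataset size quoted in the abstract.
accs = [evaluate_fold(tr, te) for tr, te in kfold_indices(936)]
print(f"accuracy: {statistics.mean(accs):.4f} ± {statistics.pstdev(accs):.4f}")
```

Reporting the population standard deviation across folds mirrors the "accuracy ± std" format used in the Results; any real run would replace the dummy accuracy with the actual per-fold test metric.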