Abstract
Chest X-ray and high-resolution computed tomography (HRCT) are essential for diagnosing pneumonia and lung cancer, but their diagnostic accuracy is limited when either modality is interpreted alone. To address this, DeepScan, a multimodal AI system combining CNNs trained on both imaging types, was developed using public datasets. The architecture comprises a ResNet-50 branch for X-rays, a DenseNet-121 branch for HRCT, and a late-fusion network. DeepScan outperformed single-modality models, achieving 94.6% accuracy, 95.2% sensitivity, 93.9% specificity, and an AUC of 0.97 on 2,000 test patients. Multimodal integration reduced false negatives for early-stage lung cancer and improved differentiation from pneumonia, supporting earlier intervention and potentially enhancing clinical workflows.
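
To illustrate the fusion scheme named in the abstract, the following is a minimal sketch of a late-fusion model with a ResNet-50 X-ray branch and a DenseNet-121 HRCT branch, assuming a PyTorch/torchvision implementation; the class count, fusion head dimensions, and slice-wise 2D handling of HRCT are illustrative assumptions, not DeepScan's published design.

```python
# Hypothetical late-fusion sketch: per-modality CNN backbones whose pooled
# features are concatenated and passed to a shared classification head.
import torch
import torch.nn as nn
from torchvision import models


class LateFusionNet(nn.Module):
    def __init__(self, num_classes: int = 3):  # e.g. normal / pneumonia / lung cancer (assumed)
        super().__init__()
        # X-ray branch: ResNet-50 backbone with its classification head removed
        self.xray_branch = models.resnet50(weights=None)
        xray_feat_dim = self.xray_branch.fc.in_features          # 2048
        self.xray_branch.fc = nn.Identity()

        # HRCT branch: DenseNet-121 backbone (assumes HRCT volumes are fed slice-wise as 2D images)
        self.hrct_branch = models.densenet121(weights=None)
        hrct_feat_dim = self.hrct_branch.classifier.in_features  # 1024
        self.hrct_branch.classifier = nn.Identity()

        # Late-fusion head: concatenate per-modality features, then classify
        self.fusion = nn.Sequential(
            nn.Linear(xray_feat_dim + hrct_feat_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, xray: torch.Tensor, hrct: torch.Tensor) -> torch.Tensor:
        fx = self.xray_branch(xray)          # (B, 2048)
        fh = self.hrct_branch(hrct)          # (B, 1024)
        fused = torch.cat([fx, fh], dim=1)   # (B, 3072)
        return self.fusion(fused)


if __name__ == "__main__":
    model = LateFusionNet()
    xray = torch.randn(2, 3, 224, 224)    # dummy chest X-ray batch
    hrct = torch.randn(2, 3, 224, 224)    # dummy HRCT slice batch
    print(model(xray, hrct).shape)        # torch.Size([2, 3])
```

Because fusion happens only after each backbone has produced its own feature vector, either branch can be trained or fine-tuned on its modality independently before the joint head is fit, which is one common motivation for a late-fusion design.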