Abstract
Background/Objectives: Breast cancer remains one of the leading causes of death among women worldwide, underscoring the need for accurate and efficient diagnostic approaches. This work develops an automated diagnostic framework based on convolutional neural networks (CNNs) capable of handling multiple imaging modalities. Methods: The proposed CNN model was trained and evaluated on several benchmark datasets spanning mammography (DDSM, MIAS, INbreast), ultrasound, magnetic resonance imaging (MRI), and histopathology (BreaKHis). Standardized preprocessing procedures were applied across all datasets, and the results were compared with leading state-of-the-art techniques. Results: The model achieved strong classification performance, with accuracy scores of 99.2% (DDSM), 98.97% (MIAS), 99.43% (INbreast), 98.00% (ultrasound), 98.43% (MRI), and 86.42% (BreaKHis). These findings surpass those of many existing approaches, confirming the robustness of the method. Conclusions: This study introduces a reliable and scalable diagnostic system that can support radiologists in early breast cancer detection. Its high accuracy, efficiency, and adaptability across imaging modalities make it a promising candidate for integration into clinical practice.