Abstract
Accurate classification of bone tumors as benign, malignant, or intermediate is crucial for treatment decisions. Misclassification may result in overtreatment of benign cases or delayed intervention for aggressive tumors, significantly affecting patient prognosis. However, current methods rely heavily on single-modality imaging analysis, which makes it difficult to handle variable lesion locations and complex cancer types. To address these limitations, we propose a novel multimodal deep learning framework that integrates clinical images, pathology slides, and blood biomarkers for automated bone tumor detection and three-class classification. The framework operates in two stages. In the first stage, a YOLOv5-based detection model localizes tumor regions in clinical images. In the second stage, a classification model uses ResNet to extract deep features from both the clinical images and the pathology slides, while abnormal blood biomarkers are converted into descriptive text by a large language model and then encoded into semantic features with BioBERT; a fusion module integrates the features from all three modalities to capture complementary information and enable accurate tumor classification. Evaluation was performed on two distinct datasets: a clinical imaging dataset for bone tumor detection and a separate multimodal cohort comprising clinical images, pathology slides, and blood biomarkers for tumor classification. The detection model demonstrated strong localization capability, achieving a test mAP@0.5 of 0.7925. For the classification task, ablation studies confirmed the complementary contribution of each modality. Notably, the multimodal fusion approach outperformed both unimodal baselines and existing models, attaining a macro-average precision of 0.9056, an F1-score of 0.8736, and an AUC of 0.9759 in tumor classification.
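To make the second-stage architecture concrete, the following is a minimal sketch of the three-branch fusion classifier described above, assuming a PyTorch implementation. The layer sizes, the fusion MLP, and all names (e.g., MultimodalBoneTumorClassifier) are illustrative assumptions rather than the authors' released code; only the overall structure (two ResNet image branches, a BioBERT text branch, and concatenation-based fusion into a three-class head) follows the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models
from transformers import AutoModel, AutoTokenizer


class MultimodalBoneTumorClassifier(nn.Module):
    """Hypothetical three-branch fusion model: clinical image + pathology slide + biomarker text."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Visual branches: ResNet-50 backbones for clinical images and pathology slides.
        self.clinical_backbone = models.resnet50(weights=None)
        self.clinical_backbone.fc = nn.Identity()           # 2048-d features
        self.pathology_backbone = models.resnet50(weights=None)
        self.pathology_backbone.fc = nn.Identity()          # 2048-d features
        # Text branch: BioBERT encodes the LLM-generated biomarker descriptions.
        self.text_encoder = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
        # Fusion module (assumed here to be simple concatenation + MLP).
        fused_dim = 2048 + 2048 + self.text_encoder.config.hidden_size
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),                     # benign / intermediate / malignant
        )

    def forward(self, clinical_img, pathology_img, text_ids, text_mask):
        f_clin = self.clinical_backbone(clinical_img)
        f_path = self.pathology_backbone(pathology_img)
        # Use the [CLS] token embedding as the sentence-level biomarker feature.
        f_text = self.text_encoder(
            input_ids=text_ids, attention_mask=text_mask
        ).last_hidden_state[:, 0]
        fused = torch.cat([f_clin, f_path, f_text], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
    enc = tokenizer(
        ["Elevated alkaline phosphatase suggests active bone turnover."],  # example LLM output
        padding=True,
        return_tensors="pt",
    )
    model = MultimodalBoneTumorClassifier()
    logits = model(
        torch.randn(1, 3, 224, 224),   # clinical image tensor
        torch.randn(1, 3, 224, 224),   # pathology patch tensor
        enc["input_ids"],
        enc["attention_mask"],
    )
    print(logits.shape)  # torch.Size([1, 3])
```

The sketch also omits the first-stage YOLOv5 detector, which would crop the tumor region before the clinical image is passed to this classifier.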