Abstract
Background and Objectives: Hip joint disorders exhibit diverse and overlapping radiological features, which complicates early diagnosis and limits the diagnostic value of single-modality imaging; imaging or clinical data in isolation may therefore inadequately represent disease-specific pathological characteristics. Materials and Methods: This retrospective study included 605 hip joints from Center A (2018-2024), comprising normal hips, osteoarthritis, osteonecrosis of the femoral head (ONFH), and femoroacetabular impingement (FAI). An independent cohort of 24 hips from Center B (2024-2025) was used for external validation. A multimodal deep learning framework was developed to jointly analyze radiographs, CT volumes, and clinical text. Modality-specific features were extracted with ResNet50 (radiographs), 3D-ResNet50 (CT volumes), and a pretrained BERT model (clinical text), then combined by attention-based fusion for four-class classification. Results: The combined Clinical+X-ray+CT model achieved an AUC of 0.949 on the internal test set, outperforming all single-modality models, with consistent gains in accuracy, sensitivity, and specificity and greater net benefit on decision curve analysis. Grad-CAM visualizations confirmed that the model attended to clinically relevant anatomical regions. Conclusions: Attention-based multimodal feature fusion substantially improves diagnostic performance for hip joint diseases, providing an interpretable and clinically applicable framework for early detection and precise classification in orthopedic imaging.
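To make the fusion step concrete, the sketch below shows one plausible attention-based fusion head of the kind the abstract describes, written in PyTorch. It is illustrative only: the feature dimensions (2048-d pooled ResNet50/3D-ResNet50 features, 768-d BERT embeddings), the 512-d shared projection, the tanh/softmax gating, and the class name `AttentionFusionClassifier` are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionClassifier(nn.Module):
    """Fuses per-modality feature vectors with learned attention weights.

    Assumed dimensions: ResNet50 / 3D-ResNet50 pooled features are 2048-d,
    BERT [CLS] embeddings are 768-d; all are projected to a 512-d space.
    """

    def __init__(self, dims=(2048, 2048, 768), fused_dim=512, num_classes=4):
        super().__init__()
        # One linear projection per modality (X-ray, CT, clinical text).
        self.proj = nn.ModuleList(nn.Linear(d, fused_dim) for d in dims)
        # Scores each projected modality; softmax over the modality axis
        # yields the attention weights used for the weighted sum.
        self.score = nn.Linear(fused_dim, 1)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, xray_feat, ct_feat, text_feat):
        feats = [xray_feat, ct_feat, text_feat]
        # Stack projected modality embeddings: (batch, 3, fused_dim).
        z = torch.stack([torch.tanh(p(f)) for p, f in zip(self.proj, feats)], dim=1)
        # Attention weights over the 3 modalities: (batch, 3, 1).
        attn = F.softmax(self.score(z), dim=1)
        fused = (attn * z).sum(dim=1)  # attention-weighted sum of modalities
        return self.classifier(fused), attn.squeeze(-1)


# Example forward pass with random backbone outputs for a batch of 2.
model = AttentionFusionClassifier()
logits, weights = model(torch.randn(2, 2048), torch.randn(2, 2048), torch.randn(2, 768))
print(logits.shape, weights.shape)  # torch.Size([2, 4]) torch.Size([2, 3])
```

A fusion head of this form would also expose the per-modality softmax weights, which could complement the Grad-CAM visualizations mentioned in the Results as a second source of interpretability.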