Abstract
Glioblastoma (GBM) is the most common and aggressive primary brain tumor in adults, with a median overall survival of less than 15 months despite standard-of-care treatment. Accurate preoperative prognostication is essential for personalized treatment planning; however, existing approaches rely primarily on magnetic resonance imaging (MRI) and often overlook the rich histopathological information contained in postoperative whole-slide images (WSIs). The inherent spatiotemporal gap between preoperative MRI and postoperative WSIs substantially hinders effective multimodal integration. To address this limitation, we propose a contrastive-learning-based Imaging-Pathology Synergistic Alignment (CL-IPSA) framework that aligns MRI and WSI data within a shared embedding space, thereby establishing robust cross-modal semantic correspondences. We further construct a cross-modal mapping library that enables patients with MRI-only data to obtain proxy pathological representations via nearest-neighbor retrieval for joint survival modeling. Experiments across multiple datasets demonstrate that incorporating proxy WSI features consistently enhances prediction performance: various convolutional neural networks (CNNs) achieve an average AUC improvement of 0.08–0.10 on the validation cohort and two independent test sets, with SEResNet34 yielding the best performance (AUC = 0.836). Our approach enables non-invasive, preoperative integration of radiological and pathological semantics, substantially improving GBM survival prediction without additional invasive procedures.
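The two mechanisms summarized above — contrastive alignment of paired MRI/WSI embeddings and nearest-neighbor retrieval of proxy pathological features for MRI-only patients — can be sketched minimally as follows. All names, dimensions, and the synthetic embeddings are illustrative assumptions, not the paper's actual encoders or data; the loss is the standard symmetric InfoNCE (CLIP-style) objective commonly used for such alignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared embedding space: paired MRI and WSI features already
# projected to d dimensions by the two (unspecified) encoders.
d = 16
n_paired = 100                                    # patients with both modalities
mri_lib = rng.normal(size=(n_paired, d))
wsi_lib = mri_lib + 0.1 * rng.normal(size=(n_paired, d))  # roughly aligned pairs


def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)


mri_lib = l2_normalize(mri_lib)
wsi_lib = l2_normalize(wsi_lib)


def infonce_loss(z_mri, z_wsi, temperature=0.07):
    """Symmetric InfoNCE over a batch of aligned (MRI, WSI) pairs:
    each MRI must pick out its own WSI among the batch, and vice versa."""
    logits = z_mri @ z_wsi.T / temperature        # (B, B) cosine similarities
    idx = np.arange(len(logits))
    log_sm_m2w = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_w2m = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(log_sm_m2w[idx, idx].mean() + log_sm_w2m[idx, idx].mean()) / 2


def retrieve_proxy_wsi(query_mri, mri_library, wsi_library):
    """Cross-modal mapping library lookup: return the WSI embedding of the
    library patient whose MRI embedding is most cosine-similar to the query."""
    sims = mri_library @ l2_normalize(query_mri)
    return wsi_library[np.argmax(sims)]


# An MRI-only patient borrows a proxy pathological representation,
# which would then be concatenated with the MRI features for survival modeling.
query = mri_lib[3] + 0.01 * rng.normal(size=d)    # noisy copy of a known patient
proxy = retrieve_proxy_wsi(query, mri_lib, wsi_lib)
```

In a real pipeline the library embeddings would come from trained encoders optimized with `infonce_loss`, and retrieval could average the top-k neighbors rather than taking a single nearest match; the sketch only fixes the ideas, not the implementation choices.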