Abstract
OBJECTIVES: Longitudinal data in electronic health records (EHRs) represent an individual's clinical history as a sequence of codified concepts, including diagnoses, procedures, medications, and laboratory tests. Generative pretrained transformers (GPT) can leverage these data to predict future events. While fine-tuning such models can enhance task-specific performance, it becomes costly when applied to many clinical prediction tasks. In contrast, a pretrained foundation model can be used in a zero-shot forecasting setting, offering a scalable alternative to fine-tuning separate models for each outcome.
MATERIALS AND METHODS: This study presents the first comprehensive analysis of zero-shot forecasting with GPT-based foundation models in EHRs, introducing a novel pipeline that formulates medical concept prediction as a generative modeling task. Unlike supervised approaches that require extensive labeled data, our method enables the model to forecast the next medical event purely from its pretraining knowledge. We evaluate performance across multiple time horizons and clinical categories, demonstrating the model's ability to capture latent temporal dependencies and complex patient trajectories without task-specific supervision.
RESULTS: The model's performance in predicting the next medical concept was evaluated using precision and recall, achieving an average top-1 precision of 0.614 and recall of 0.524. For 12 major diagnostic conditions, the model demonstrated strong zero-shot performance, achieving high true positive rates while maintaining low false positive rates.
DISCUSSION: We demonstrate the power of a foundation EHR GPT model in capturing diverse phenotypes and enabling robust, zero-shot forecasting of clinical outcomes. This capability highlights both the model's versatility across conditions such as liver cancer and SLE and its limitations in more ambiguous settings such as depression, while also revealing meaningful latent clinical structure.
CONCLUSION: This capability enhances the versatility of predictive healthcare models and reduces the need for task-specific training, enabling more scalable applications in clinical settings.