Zero-shot medical event prediction using a generative pretrained transformer on electronic health records


Abstract

OBJECTIVES: Longitudinal data in electronic health records (EHRs) represent an individual's clinical history through a sequence of codified concepts, including diagnoses, procedures, medications, and laboratory tests. Generative pretrained transformers (GPT) can leverage these data to predict future events. While fine-tuning such models can enhance task-specific performance, it becomes costly when applied to many clinical prediction tasks. In contrast, a pretrained foundation model can be used in a zero-shot forecasting setting, offering a scalable alternative to fine-tuning separate models for each outcome. MATERIALS AND METHODS: This study presents the first comprehensive analysis of zero-shot forecasting with GPT-based foundation models in EHRs, introducing a novel pipeline that formulates medical concept prediction as a generative modeling task. Unlike supervised approaches requiring extensive labeled data, our method enables the model to forecast the next medical event purely from its pretraining knowledge. We evaluate performance across multiple time horizons and clinical categories, demonstrating the model's ability to capture latent temporal dependencies and complex patient trajectories without task supervision. RESULTS: The model's performance in predicting the next medical concept was evaluated using precision and recall metrics, achieving an average top-1 precision of 0.614 and recall of 0.524. For 12 major diagnostic conditions, the model demonstrated strong zero-shot performance, achieving high true positive rates while maintaining low false positive rates. DISCUSSION: We demonstrate the power of a foundational EHR GPT model in capturing diverse phenotypes and enabling robust, zero-shot forecasting of clinical outcomes. This capability highlights both its versatility across conditions such as liver cancer and SLE, and its limitations in more ambiguous settings such as depression, while also revealing meaningful latent clinical structure.
CONCLUSION: This capability enhances the versatility of predictive healthcare models and reduces the need for task-specific training, enabling more scalable applications in clinical settings.
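To make the evaluation setup concrete, the sketch below illustrates how next-medical-concept forecasting over codified EHR sequences can be framed and scored with top-1 precision and recall. A toy bigram transition model stands in for the pretrained GPT; the concept codes, trajectories, and function names are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of zero-shot next-concept forecasting over codified
# EHR events. A bigram transition model is a stand-in for a pretrained GPT;
# concept codes and trajectories below are made up for illustration.
from collections import Counter, defaultdict

def pretrain_bigram(trajectories):
    """'Pretrain' by counting transitions between consecutive concepts."""
    counts = defaultdict(Counter)
    for seq in trajectories:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, context, k=1):
    """Zero-shot top-k forecast: likeliest concepts after the last event."""
    last = context[-1]
    return [c for c, _ in counts[last].most_common(k)]

def top1_metrics(counts, eval_pairs):
    """Micro-averaged top-1 precision/recall over (context, true_next) pairs."""
    tp = 0
    for context, true_next in eval_pairs:
        preds = predict_next(counts, context, k=1)
        if preds and preds[0] == true_next:
            tp += 1
    n = len(eval_pairs)
    # With exactly one prediction and one target per case, precision equals
    # recall; in the paper they differ (0.614 vs 0.524), consistent with
    # evaluation against sets of upcoming events within a horizon.
    return tp / n, tp / n

# Illustrative patient trajectories of ICD-like concept codes (assumed).
trajs = [
    ["E11", "I10", "N18"],  # diabetes -> hypertension -> CKD
    ["E11", "I10", "E78"],
    ["E11", "I10", "N18"],
]
model = pretrain_bigram(trajs)
precision, recall = top1_metrics(model, [(["E11", "I10"], "N18")])
```

The single-target simplification here is deliberate: it isolates the core idea that forecasting is performed purely from pretrained sequence statistics, with no task-specific labels or fine-tuning step.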
