From EHR chaos to clinical clarity: enhancing foundational encoders with task-specific attention for domain-oriented representation learning in downstream clinical applications


Abstract

Electronic health records (EHRs) contain rich clinical information, but they pose challenges for representation learning: long free-text notes, domain shift, irregular structure, and incomplete, sparse fields. To address these challenges, we propose a lightweight, encoder-agnostic framework that segments each record into clinically meaningful sections, encodes them with a shared foundation-model encoder, stabilizes features by mixing upper transformer layers, and aggregates sections with a task-specific attention head cast as a permutation-invariant set function. This section-aware design mitigates truncation from token limits without specialized long-text engineering and focuses the model on task-relevant evidence. We demonstrate consistent improvements across seven baseline encoders and three downstream applications: disease prediction, clustering, and representational digital-twin (RDT) retrieval. For prediction on large corpora, section-aware aggregation improves accuracy and F1 and shifts per-disease ROC curves toward higher AUC. Clustering quality increases across ARI, homogeneity, completeness, and V-measure, indicating more coherent patient strata. In RDT retrieval, neighborhoods become more label-consistent (higher homogeneity and concordance) while maintaining high nearest-neighbor similarity. Ablations show that mixing upper transformer layers combined with task-specific section attention improves performance and reduces cross-seed variance, with a modest calibration trade-off. Overall, our framework produces portable, interpretable patient embeddings that can support downstream analytics and decision support in real-world clinical settings.
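The two core aggregation steps named in the abstract, mixing upper encoder layers and attention-pooling the section embeddings as a permutation-invariant set function, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the softmax-weighted layer mixing, and the single learned query vector are illustrative assumptions standing in for the paper's task-specific attention head.

```python
import numpy as np

def mix_upper_layers(layer_states, alpha):
    """Softmax-weighted mix of the upper encoder layers (an assumed form
    of the paper's 'upper-layer mixing').

    layer_states: (n_layers, n_sections, d) hidden states per layer
    alpha:        (n_layers,) learnable mixing logits
    returns:      (n_sections, d) mixed section embeddings
    """
    w = np.exp(alpha - alpha.max())
    w /= w.sum()
    return np.tensordot(w, layer_states, axes=1)

def section_attention_pool(H, q):
    """Attention-pool section embeddings with a single query vector q.

    Softmax over per-section scores gives weights summing to 1; the
    pooled vector is their weighted sum, so the result is invariant to
    section order (a permutation-invariant set function).
    """
    scores = H @ q                    # (n_sections,)
    scores -= scores.max()            # numerical stability
    w = np.exp(scores)
    w /= w.sum()                      # attention weights over sections
    return w @ H                      # (d,) patient embedding

# Toy record: 3 upper layers, 4 sections, 8-dim embeddings.
rng = np.random.default_rng(0)
layer_states = rng.normal(size=(3, 4, 8))
alpha = rng.normal(size=3)
q = rng.normal(size=8)

H = mix_upper_layers(layer_states, alpha)
pooled = section_attention_pool(H, q)

# Permutation invariance: shuffling the sections leaves the embedding unchanged.
perm = rng.permutation(4)
pooled_perm = section_attention_pool(H[perm], q)
assert np.allclose(pooled, pooled_perm)
```

In practice the query (or a small scoring MLP) would be trained per downstream task, which is what makes the pooling "task-specific" while the shared encoder stays frozen or lightly tuned.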
