Abstract
Electronic health records (EHRs) contain rich clinical information, yet they pose challenges for representation learning: long free-text notes, domain shift, irregular structure, and incomplete, sparse fields. To address these challenges, we propose a lightweight, encoder-agnostic framework that segments each record into clinically meaningful sections, encodes them with a shared foundation-model encoder, stabilizes features via upper-layer mixing, and aggregates sections with a task-specific attention head cast as a permutation-invariant set function. This section-aware design mitigates truncation from token limits without specialized long-text engineering and focuses the model on task-relevant evidence. We demonstrate consistent improvements across seven baseline encoders and three downstream applications: disease prediction, clustering, and representational digital-twin (RDT) retrieval. For prediction on large corpora, section-aware aggregation improves accuracy and F1 and shifts per-disease ROC curves toward higher AUC. Clustering quality increases across ARI, homogeneity, completeness, and V-measure, indicating more coherent patient strata. In RDT retrieval, neighborhoods become more label-consistent (higher homogeneity and Concordance) while maintaining high nearest-neighbor similarity. Ablations show that mixing upper transformer layers, combined with task-specific section attention, improves performance and reduces cross-seed variance at a modest cost in calibration. Overall, our framework produces portable, interpretable patient embeddings that may support a range of downstream analytics and decision-support tasks in real-world clinical settings.
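The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all function names, shapes, and the random placeholder embeddings are hypothetical, standing in for a shared encoder's upper-layer hidden states. It shows the two key operations: a learned convex mix of upper-layer features, and attention pooling over sections that is a permutation-invariant set function (reordering sections leaves the patient embedding unchanged).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def mix_upper_layers(layer_states, mix_logits):
    """Stabilize section features via a learned convex mix of upper layers.
    layer_states: (L, S, d) hidden states from the top L encoder layers
                  for S sections; mix_logits: (L,) learned mixing logits."""
    w = softmax(mix_logits)                          # (L,) mixing weights
    return np.einsum("l,lsd->sd", w, layer_states)   # (S, d) section features

def attention_pool(sections, task_query):
    """Task-specific attention head over sections. Scores depend only on
    each section's content, so the aggregate is permutation-invariant."""
    scores = softmax(sections @ task_query)          # (S,) attention weights
    return scores @ sections                         # (d,) patient embedding

# Hypothetical dimensions: 4 upper layers, 6 sections, embedding size 8.
L, S, d = 4, 6, 8
layer_states = rng.normal(size=(L, S, d))  # placeholder encoder outputs
mix_logits = rng.normal(size=L)            # learned in practice
task_query = rng.normal(size=d)            # learned per downstream task

sections = mix_upper_layers(layer_states, mix_logits)
patient_emb = attention_pool(sections, task_query)

# Set-function property: shuffling sections does not change the embedding.
perm = rng.permutation(S)
patient_emb_perm = attention_pool(sections[perm], task_query)
assert np.allclose(patient_emb, patient_emb_perm)
print(patient_emb.shape)
```

Because each section is encoded independently before pooling, no single forward pass has to fit the whole record within the encoder's token limit, which is how the design sidesteps truncation without long-text engineering.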