Patient address parsing via KG-aware contrastive learning and constrained on-prem LLM inference


Abstract

Address parsing seeks to map noisy, abbreviated free-text addresses into standardized hierarchical tuples for large-scale information systems. Existing approaches struggle with semantic and structural ambiguity, hallucination from unconstrained generation, and deployment constraints under privacy and governance requirements. We present AddrKG-LLM, a two-stage framework that combines knowledge-graph (KG)-aware retrieval with schema-restricted large language model (LLM) decoding. First, contrastive learning over multi-view administrative graphs yields node embeddings that retrieve and re-rank a compact Top-K candidate set, bounding the search space while preserving high gold coverage (Recall@K). Second, a candidate-restricted decoder running on-premises produces JSON-compliant outputs, enforcing single-candidate field consistency and alignment with KG priors to improve controllability and policy compliance. The method comprises three components: (i) multi-view graph aggregation, (ii) a hierarchy-aware self-supervised contrastive objective that derives positives and negatives from administrative relations to align textual and graph embeddings, and (iii) candidate-restricted decoding within the KG-derived Top-K set. Using de-identified real-world records, we evaluate structural consistency via micro-level accuracy ($\mathrm{Acc}_{\mathrm{micro}}$) and macro-level accuracy ($\mathrm{Acc}_{\mathrm{macro}}$), and assess system properties with Recall@K and latency. Against strong string-matching, sequence-labeling, and generic LLM baselines, AddrKG-LLM delivers consistent gains in $\mathrm{Acc}_{\mathrm{micro}}$ and $\mathrm{Acc}_{\mathrm{macro}}$ with favorable Recall@K. Overall, coupling KG-aware retrieval with constrained on-prem LLM decoding yields an accurate, controllable, and deployable solution for noisy-address structuring across domains.
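To make the two-stage design concrete, the sketch below shows the retrieval-then-constrained-decoding flow the abstract describes. It is a minimal illustration under stated assumptions, not the authors' implementation: the names `embed_text`, `topk_candidates`, `constrained_decode`, and the toy `KG_NODES` hierarchy are hypothetical, and both the contrastively trained encoder and the on-prem LLM are replaced by stubs.

```python
# Minimal sketch of a two-stage pipeline like AddrKG-LLM.
# Hypothetical names throughout; encoders and the LLM are stubbed.

import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: KG-aware retrieval --------------------------------------
# Assume node embeddings for administrative units were trained with a
# hierarchy-aware contrastive objective; here they are random stand-ins.
KG_NODES = [
    "Beijing/Haidian/Zhongguancun",
    "Beijing/Chaoyang/Sanlitun",
    "Shanghai/Pudong/Lujiazui",
]
node_emb = rng.normal(size=(len(KG_NODES), 64))
node_emb /= np.linalg.norm(node_emb, axis=1, keepdims=True)

def embed_text(address: str) -> np.ndarray:
    """Stub text encoder; in the paper's setting this is aligned to the
    KG embedding space via contrastive learning."""
    vec = rng.normal(size=64)
    return vec / np.linalg.norm(vec)

def topk_candidates(address: str, k: int = 2) -> list[str]:
    """Retrieve the Top-K KG nodes by cosine similarity. This bounds the
    decoder's search space while aiming to keep gold coverage (Recall@K)
    high."""
    q = embed_text(address)
    scores = node_emb @ q
    return [KG_NODES[i] for i in np.argsort(-scores)[:k]]

# --- Stage 2: candidate-restricted decoding ---------------------------
def constrained_decode(address: str, candidates: list[str]) -> dict:
    """Stand-in for the on-prem LLM: the decoder may only emit one of
    the retrieved candidates, split into schema fields, so hallucinated
    administrative units are excluded by construction."""
    choice = candidates[0]  # a real LLM would score/select among candidates
    city, district, street = choice.split("/")
    return {"city": city, "district": district, "street": street}

raw = "haidian zhongguancun st., BJ"
cands = topk_candidates(raw, k=2)
print(constrained_decode(raw, cands))
```

The key design point the sketch preserves is that Stage 2 chooses among Stage 1's candidates rather than generating free text, which is what enforces single-candidate field consistency and JSON-schema compliance.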
