Abstract
Address parsing seeks to map noisy, abbreviated free-text addresses into standardized hierarchical tuples for large-scale information systems. Existing approaches struggle with semantic and structural ambiguity, hallucination from unconstrained generation, and deployment constraints under privacy and governance requirements. We present AddrKG-LLM, a two-stage framework that combines knowledge-graph (KG)-aware retrieval with schema-restricted large language model (LLM) decoding. First, contrastive learning over multi-view administrative graphs yields node embeddings that retrieve and re-rank a compact Top-K candidate set, bounding the search space while preserving high gold coverage (Recall@K). Second, a candidate-restricted decoder running on-premises produces JSON-compliant outputs, enforcing single-candidate field consistency and alignment with KG priors to improve controllability and policy compliance. The method comprises three components: (i) multi-view graph aggregation, (ii) a hierarchy-aware self-supervised contrastive objective that derives positives and negatives from administrative relations to align textual and graph embeddings, and (iii) candidate-restricted decoding within the KG-derived Top-K set. Using de-identified real-world records, we evaluate structural consistency via micro-level accuracy (Acc_micro) and macro-level accuracy (Acc_macro), and assess system properties with Recall@K and latency. Against strong string-matching, sequence-labeling, and generic LLM baselines, AddrKG-LLM delivers consistent gains in Acc_micro and Acc_macro with a favorable Recall@K. Overall, coupling KG-aware retrieval with constrained on-premises LLM decoding yields an accurate, controllable, and deployable solution for structuring noisy addresses across domains.
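The two-stage design above can be illustrated with a minimal sketch: retrieve a Top-K candidate set by embedding similarity against KG nodes, then restrict the structured output to the fields of a single retrieved candidate. All function names, the toy 2-d "embeddings", and the address data below are invented for illustration; the real system uses learned contrastive embeddings and an LLM for the constrained choice.

```python
import json
import math

def cosine(u, v):
    # Plain cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query_vec, kg_nodes, k=3):
    # Stage 1 (sketch): KG-aware retrieval bounds the search space
    # to a compact Top-K candidate set.
    ranked = sorted(kg_nodes, key=lambda n: cosine(query_vec, n["emb"]),
                    reverse=True)
    return ranked[:k]

def candidate_restricted_decode(candidates):
    # Stage 2 (sketch): instead of unconstrained generation, the output
    # is serialized from one retrieved candidate, so every field stays
    # consistent with a single KG entry (JSON-compliant by construction).
    best = candidates[0]  # stand-in for the LLM's constrained selection
    return json.dumps({"province": best["province"],
                       "city": best["city"],
                       "district": best["district"]})

# Toy KG nodes; "emb" stands in for learned node embeddings.
kg = [
    {"emb": [0.9, 0.1], "province": "P1", "city": "C1", "district": "D1"},
    {"emb": [0.1, 0.9], "province": "P2", "city": "C2", "district": "D2"},
    {"emb": [0.8, 0.2], "province": "P1", "city": "C1", "district": "D3"},
]

cands = top_k([1.0, 0.0], kg, k=2)
print(candidate_restricted_decode(cands))
```

Because the decoder can only emit fields drawn from a retrieved candidate, hallucinated administrative units are ruled out by construction, which is the controllability property the framework targets.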