Abstract
Organizations increasingly integrate and share person-level data across internal platforms and external partners to enable analytics, digital services, and evidence-based decision making. However, combining quasi-identifiers across systems and releases can enable re-identification via linkage attacks, creating regulatory-compliance and trust risks. This paper proposes an operational methodology for (i) identifying direct identifiers and quasi-identifiers (QIs), (ii) quantifying baseline re-identification risk using uniqueness and prosecutor-style risk proxies, and (iii) applying Local Differential Privacy (LDP) to reduce linkability prior to data sharing. We implement categorical LDP using a Generalized Randomized Response (GRR) mechanism and evaluate privacy-utility trade-offs through a sensitivity analysis over the privacy budget ε. Utility is quantified using (a) distributional distortion (total variation distance) and (b) downstream task performance (job-title classification). Finally, we discuss repeated releases and privacy-budget accounting as mitigations for longitudinal deployments.
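To make the mechanism concrete, the following is a minimal sketch of categorical GRR and the two utility ingredients mentioned above: the standard unbiased frequency estimator that inverts the GRR noise, and total variation distance between the estimated and true distributions. Function names and the example domain are illustrative, not taken from the paper's implementation.

```python
import math
import random
from collections import Counter

def grr_perturb(value, domain, epsilon, rng=random):
    """Generalized Randomized Response over a categorical domain of size k.

    Reports the true value with probability p = e^eps / (e^eps + k - 1),
    otherwise a uniformly random *other* value. Satisfies eps-LDP.
    """
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    # Each of the k - 1 other values is reported with probability q.
    others = [v for v in domain if v != value]
    return rng.choice(others)

def estimate_frequencies(reports, domain, epsilon):
    """Unbiased estimate of the true value frequencies from GRR reports."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)
    counts = Counter(reports)
    # Invert E[observed fraction] = p * f_v + q * (1 - f_v).
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

def total_variation(f, g, domain):
    """Total variation distance between two distributions on the domain."""
    return 0.5 * sum(abs(f[v] - g[v]) for v in domain)
```

Sweeping ε and plotting the total variation distance between `estimate_frequencies(...)` and the empirical true distribution reproduces the kind of privacy-utility sensitivity analysis the abstract describes: smaller ε inflates the noise term `q/(p - q)` and hence the distortion.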