Abstract
BACKGROUND: International Classification of Diseases (ICD) coding is essential for health insurance reimbursement, healthcare delivery, and public health management, supporting quality assessment, cost control, and clinical research. Traditional ICD coding relies on manual processes that are labor-intensive, time-consuming, and prone to human error. Large language models (LLMs) offer a promising approach to automating medical record coding; however, their clinical application is limited by the complexity of medical records and the highly specialized nature of clinical knowledge.

OBJECTIVE: This study aims to evaluate the effects of different knowledge-based prompting strategies on LLMs' ICD coding performance, identify optimal combinations of models and prompts, and assess their effectiveness in real-world medical record coding tasks.

METHODS: A total of 800 discharge summaries from the Department of Urology at the First Affiliated Hospital of Soochow University, dated between 1 January and 31 May 2025, were randomly selected to construct a standardized dataset. The study was conducted in two stages. First, five prompting strategies were evaluated with GPT-4o on primary diagnosis, secondary diagnosis, and surgical procedure coding to identify the optimal strategy. Second, the optimal strategy was applied to multiple LLMs to compare their coding performance.

RESULTS: Contextual prompting tailored to medical specialties achieved the best performance with GPT-4o, with accuracies of 84%, 85%, and 82% on primary diagnosis, secondary diagnosis, and surgical procedure coding, respectively. When this strategy was applied across models, DeepSeek-V3 achieved the highest overall performance, with accuracies of 89.5%, 88.6%, and 93.3% on the same three tasks.

CONCLUSION: An integrated framework combining contextual prompting with DeepSeek-V3 substantially improves the accuracy and efficiency of automated ICD coding, demonstrating strong potential for clinical application.
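To make the evaluated technique concrete, the following is a minimal, hypothetical sketch of specialty-tailored contextual prompting for ICD coding, assuming an OpenAI-compatible chat API. It is not the study's actual code: the system-prompt wording, function name, output format, and model identifier are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of specialty-tailored
# contextual prompting for ICD coding via an OpenAI-compatible chat API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical specialty context; the paper's actual prompt text is not given.
SYSTEM_PROMPT = (
    "You are a certified medical coder working in a urology department. "
    "Given a discharge summary, assign ICD codes for the primary diagnosis, "
    "secondary diagnoses, and surgical procedures. "
    "Return one code per line in the format CODE | DESCRIPTION."
)

def code_discharge_summary(summary: str, model: str = "gpt-4o") -> str:
    """Send one discharge summary to the model and return its raw coding output."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output aids reproducible evaluation
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": summary},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(code_discharge_summary("Discharge summary text goes here..."))
```

Under this setup, comparing models (e.g., swapping the model identifier for a DeepSeek-V3 endpoint) reduces to changing the `model` argument while holding the contextual prompt fixed, mirroring the study's second stage.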