Completeness and Quality of Neurology Referral Letters Generated by a Large Language Model for Standardized Scenarios


Abstract

Background and Objectives: Large language models (LLMs) offer promising applications in healthcare, including drafting referral letters. However, access to LLMs specifically designed for medical practice remains limited. While ChatGPT is widely available, whether it can generate comprehensive and clinically appropriate neurology referral letters is uncertain. This study aimed to systematically evaluate the completeness and quality of neurology referral letters generated by ChatGPT for standardized clinical scenarios.

Materials and Methods: Five standardized clinical scenarios representing common neurological complaints encountered in family medicine settings (headache, memory problems, stroke/TIA, tremor, and radiculopathy) were used. Using a consistent prompt, ChatGPT (GPT-4o, 2025 release) generated 10 referral letters per scenario (50 letters in total). A dual board-certified neurologist and family physician scored each letter with a 30-point rubric spanning completeness (demographics, chief complaint, history of present illness, physical examination findings, management, and consultation questions) and quality (language level, structure, and letter length). Descriptive statistics and inferential analyses (ANOVA and Kruskal-Wallis tests) were used to compare performance across scenarios.

Results: The mean total score was 25.76/30 (95% CI: 24.85-26.67). Completeness averaged 87%, while language and structure consistently scored above 90%. Content gaps appeared in 36 of 50 letters (72%), mainly in the history of present illness and physical examination sections. Variability across letters was observed but did not differ significantly between scenarios (ANOVA: F = 1.14, p = 0.352; Kruskal-Wallis: H = 3.52, p = 0.475).

Conclusions: ChatGPT produced neurology referral letters of high linguistic quality but variable completeness, especially for clinically complex content. The pattern of variability among letters reflected model inconsistency rather than case type. Reliance on a single rater and a non-validated rubric are limitations; future studies should include multiple raters, inter-rater reliability testing, and validated scoring frameworks. Ultimately, access to tailored LLMs trained exclusively for medical documentation could improve outcomes while safeguarding patient privacy.
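The between-scenario comparison reported in the Results (one-way ANOVA over per-scenario rubric scores) can be sketched in pure Python; in practice one would typically call `scipy.stats.f_oneway` and `scipy.stats.kruskal` directly. The scores below are hypothetical placeholders, since the study's raw data are not published, so the printed F statistic is illustrative only.

```python
# Sketch of a one-way ANOVA across the five scenarios.
# NOTE: all scores are hypothetical placeholders, not the study's data.
from statistics import mean

# Hypothetical 30-point rubric scores: 10 letters per scenario.
scores = {
    "headache":      [26, 27, 25, 24, 28, 26, 25, 27, 24, 26],
    "memory":        [25, 26, 24, 27, 25, 26, 23, 25, 27, 26],
    "stroke_tia":    [27, 26, 28, 25, 26, 27, 25, 26, 28, 27],
    "tremor":        [24, 25, 26, 23, 26, 25, 24, 26, 25, 24],
    "radiculopathy": [26, 25, 27, 26, 24, 25, 27, 26, 25, 26],
}

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: mean square between groups
    divided by mean square within groups."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(list(scores.values()))
print(f"F = {f_stat:.2f} across {len(scores)} scenarios")
```

A non-significant F (as in the paper's F = 1.14, p = 0.352) means the scenario means do not differ more than the letter-to-letter noise, consistent with the conclusion that variability reflects model inconsistency rather than case type.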
