Accuracy of large language model transcription of simulated physician-patient verbal interactions


Abstract

Background

Large language models (LLMs) are increasingly used in clinical medicine for tasks such as automated note generation. However, LLM-generated notes remain vulnerable to transcription errors, raising concerns about their reliability in clinical practice. We analyzed the types and rates of LLM mis-transcription errors (deletions, substitutions, and additions) and LLM mis-attribution errors (assigning dialogue to an incorrect speaker) in transcripts generated by a single LLM, and tested whether error rates differed by speaker role and speaker sex. We also examined plausible sources of LLM-related error, including overlapping speech and speaking-turn taking, and hypothesized that higher-quality audio would be associated with fewer transcription errors.

Methods

In this retrospective single-center study, an LLM (NotebookLM) generated speaker-labeled transcripts from audio recordings of twelve standardized patient (SP)–medical student encounters involving three SPs in a single simulated clinical scenario. Six encounters were re-recorded with higher-fidelity audio (HFA) to evaluate the effect of recording quality on errors. LLM-generated transcripts were compared with gold-standard transcripts. Outcomes included target word errors (substitutions and deletions), insertions, turn-taking errors, mis-attributed-speaker word errors, semantic errors (errors that changed the meaning of a word or phrase), medical terminology errors, speaking turns, and overlapping speech.

Results

Interactions averaged 2,226 ± 252 words. Mean transcription error frequencies were 73 ± 26 target word errors (3.3% of target words), 22 ± 13 substitutions (1.0%), 51 ± 22 deletions (2.3%), and 9 ± 4 insertions (0.4%). There were 19 ± 5 semantic word/phrase errors, of which 8 ± 4 were due to medical terminology errors. For speaker-attribution accuracy, there were 15 ± 12 turn-taking errors (7.3% of all speaking turns), 5 ± 6 semantic turn-taking errors (errors that altered meaning), and 48 ± 39 mis-attributed-speaker word errors. Overlapping speech accounted for 19.1% of total word errors and 16.3% of mis-attributed-speaker word errors. Speaking turns were correlated with target word errors and insertions (r = 0.41), turn-taking errors (r = 0.62), and mis-attributed-speaker word errors (r = 0.51). HFA recordings reduced but did not eliminate the errors.

Conclusions

In this single simulated clinical scenario involving three SPs and 12 SP–student interactions, overlapping speech, turn-taking, medical terminology, and audio fidelity were frequent contributors to transcription errors with NotebookLM. Though LLM transcription errors were modest, even small numbers of errors can have a meaningful impact on documentation. These findings suggest that caution is warranted when relying on LLMs for fully autonomous clinical note generation. LLMs may be most appropriately used in a supportive role, such as assisting clinicians in reviewing and improving physician-authored documentation, rather than replacing clinician involvement in the documentation process.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12911-026-03414-3.
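
The percentages quoted in the Results are simple ratios of the mean error counts to the mean interaction length (e.g., 73 / 2,226 ≈ 3.3%). The short Python sketch below reproduces those figures from the reported means; the variable names are illustrative and not taken from the paper, and the turn-taking rate (7.3%) is relative to speaking turns rather than words, so it is not covered here.

```python
# Minimal sketch: recompute the word-level error rates reported in the Results
# from the mean counts per interaction. Names are illustrative, not from the paper.

mean_target_words = 2226  # mean words per SP-student interaction

mean_error_counts = {
    "target word errors (substitutions + deletions)": 73,
    "substitutions": 22,
    "deletions": 51,
    "insertions": 9,
}

for label, count in mean_error_counts.items():
    rate_pct = 100 * count / mean_target_words  # errors as % of target words
    print(f"{label}: {count} ({rate_pct:.1f}% of target words)")
```

Running this prints 3.3%, 1.0%, 2.3%, and 0.4%, matching the abstract's reported rates.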
