Abstract
BACKGROUND: Clinical documentation is a major contributor to clinician workload and burnout, with physicians spending more than half of their workday on electronic health record (EHR) tasks. Artificial intelligence (AI)-based speech recognition (ASR) tools promise to reduce this burden by generating draft notes from dictated or conversational clinical encounters. Despite rapid adoption, concerns remain about their real-world accuracy, reliability, and ability to capture clinically relevant information. AIMS: To systematically map the breadth of published evidence on the accuracy, reliability, efficiency, and clinical information capture of ASR systems used for clinical documentation in healthcare settings. METHODS: The scoping review employed the methodology developed by Arksey and O'Malley in 2005 and further expanded by Levac and Colquhoun in 2010. Four databases (PubMed, Scopus, Web of Science, and MEDLINE) were searched for studies published between 2008 and 2025. All findings were reported according to PRISMA guidelines for scoping reviews. RESULTS: Of 3,520 records, 32 met the inclusion criteria, comprising benchmarking studies, controlled comparisons, qualitative studies, and retrospective reviews. Across settings, ASR showed substantial accuracy limitations, with word error rates ranging from moderate in dictated notes to very high in conversational and emergency contexts. Common errors included deletions, substitutions, and misrecognition of medication names or brief utterances. Although some studies reported reduced typing burden and improved workflow efficiency, systems frequently missed clinically relevant details. Evidence for improvements in note completeness was mixed, and little research linked system accuracy to patient safety or diagnostic outcomes. CONCLUSION: ASR can reduce typing and improve documentation efficiency, sometimes capturing richer narrative detail.
However, frequent and clinically significant errors, shaped by linguistic complexity, context, and speaker variation, make unsupervised use unsafe. Human oversight remains essential, and continued refinement, rigorous evaluation, and attention to workflow, cognitive burden, and equity are required.