A knowledge-driven framework for surgical safety check integration using speech recognition and speaker verification

Abstract

BACKGROUND: The WHO Surgical Safety Checklist reduces preventable errors, but real operating rooms are characterized by overlapping speech, high ambient noise, acoustically similar vocalizations, uncertain speaker identity, and occasional omission of checklist items. ASR-only systems that ignore identity constraints and semantic-temporal structure are therefore prone to misrecognition and incorrect verification. METHOD: We design an integrated verification framework that couples automatic speech recognition and speaker verification (ASR + SV) with a knowledge-driven rule engine derived from the WHO checklist. Multichannel audio is processed by a Conformer-based ASR module and an ECAPA-TDNN speaker verification model, after which a rule layer enforces consistency across the semantic content, speaker role, and checklist phase using an explicit ontology and conflict-resolution rules. The system generates real-time prompts in four states ("pass," "fault," "alarm," "uncertain"). Performance is evaluated primarily in high-fidelity simulated operating-room scenarios with controlled noise levels, speaking distances, and multi-speaker interactions, using word error rate (WER), equal error rate (EER), checklist verification accuracy, and alarm rate. Three configurations are compared on the same held-out sessions: "ASR-only," "ASR + SV," and the full knowledge-driven method; ablation experiments isolate the contribution of the rule layer. RESULTS: Under medium-to-high noise and multi-speaker interference, and relative to the ASR-only baseline, the full framework reduced WER from 18.7% to approximately 13.5% and achieved a speaker-verification EER of about 3.1%. Checklist verification accuracy reached 93.8%, while the alarm rate decreased to roughly 2.7%. The knowledge layer corrected errors arising from homophones, accent drift, and role confusion by constraining "role-semantics-process" relations, and maintained robust performance at speaking distances up to 1.5 m and background noise of 60 dB. Residual failures were mainly associated with extreme speech overlap and unseen vocabulary, suggesting that lexicon adaptation and speech separation will be necessary for further gains. CONCLUSION: The proposed knowledge-driven ASR + SV framework jointly addresses semantic correctness and speaker identity while remaining interpretable, auditable, and suitable for embedded deployment. It provides a technical foundation for "time-out" and "operation review" functions in intelligent operating rooms. Because the present validation is based largely on simulated scenarios with limited real-world testing and no formal user or ethical evaluation, future work will focus on clinical pilot studies, integration with electronic medical records and multimodal OR data, and a deeper analysis of privacy, accountability, and workflow acceptance.
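To make the abstract's rule-layer idea concrete, the following is a minimal, hypothetical sketch of how a knowledge-driven layer might combine an ASR transcript, a speaker-verification score, and the expected checklist phase into one of the four prompt states. The checklist entries, role names, keywords, and score thresholds are illustrative assumptions, not the paper's actual ontology or parameters.

```python
# Hypothetical sketch of a rule layer over ASR + SV outputs.
# CHECKLIST, roles, keywords, and thresholds are illustrative only.

CHECKLIST = {
    # phase -> (expected speaker role, required phrase in the utterance)
    "sign_in": ("anesthetist", "patient identity confirmed"),
    "time_out": ("surgeon", "procedure confirmed"),
    "sign_out": ("nurse", "instrument count correct"),
}

def verify_step(phase, transcript, speaker_role, sv_score,
                sv_accept=0.7, sv_reject=0.4):
    """Return 'pass', 'fault', 'alarm', or 'uncertain' for one checklist step."""
    expected_role, keyword = CHECKLIST[phase]
    semantic_ok = keyword in transcript.lower()
    if sv_score < sv_reject:
        return "alarm"        # speaker identity cannot be trusted at all
    if sv_score < sv_accept:
        return "uncertain"    # borderline speaker evidence, ask for repeat
    if speaker_role != expected_role:
        return "fault"        # wrong role speaking for this phase
    return "pass" if semantic_ok else "fault"
```

A layer like this is interpretable and auditable by construction: every prompt state traces back to an explicit role-semantics-phase rule rather than to an opaque model score alone.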