Abstract
BACKGROUND: The WHO Surgical Safety Checklist reduces preventable errors, but real operating rooms are characterized by overlapping speech, high ambient noise, acoustically similar utterances, uncertain speaker identity, and occasional omission of checklist items. ASR-only systems that ignore identity constraints and semantic-temporal structure are therefore prone to misrecognition and incorrect verification.

METHOD: We design an integrated verification framework that couples automatic speech recognition and speaker verification (ASR + SV) with a knowledge-driven rule engine derived from the WHO checklist. Multichannel audio is processed by a Conformer-based ASR module and an ECAPA-TDNN speaker verification model, after which a rule layer enforces consistency across semantic content, speaker role, and checklist phase using an explicit ontology and conflict-resolution rules. The system generates real-time prompts in four states ("pass," "fault," "alarm," "uncertain"). Performance is evaluated primarily in high-fidelity simulated operating-room scenarios with controlled noise levels, speaking distances, and multi-speaker interactions, using word error rate (WER), equal error rate (EER), checklist verification accuracy, and alarm rate. Three configurations are compared on the same held-out sessions: "ASR-only," "ASR + SV," and the full knowledge-driven method; ablation experiments isolate the contribution of the rule layer.

RESULTS: Under medium-to-high noise and multi-speaker interference, the full framework reduced WER from the ASR-only baseline's 18.7% to approximately 13.5% and achieved an EER of about 3.1%. Checklist verification accuracy reached 93.8%, while the alarm rate decreased to roughly 2.7%. By constraining "role-semantics-process" relations, the knowledge layer corrected errors arising from homophones, accent drift, and role confusion, and it maintained robust performance at speaking distances of up to 1.5 m and background noise of 60 dB. Residual failures were mainly associated with extreme speech overlap and unseen vocabulary, suggesting that lexicon adaptation and speech separation will be necessary for further gains.

CONCLUSION: The proposed knowledge-driven ASR + SV framework jointly addresses semantic correctness and speaker identity while remaining interpretable, auditable, and suitable for embedded deployment. It provides a technical foundation for "time-out" and "operation review" functions in intelligent operating rooms. Because the present validation relies largely on simulated scenarios, with limited real-world testing and no formal user or ethical evaluation, future work will focus on clinical pilot studies, integration with electronic medical records and multimodal OR data, and a deeper analysis of privacy, accountability, and workflow acceptance.
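To make the rule layer's role concrete, the sketch below shows one plausible way such a "role-semantics-process" check could map a recognized utterance to the four prompt states. This is an illustrative assumption, not the paper's implementation: the checklist items, role assignments, confidence threshold, and the fault-versus-alarm mapping are all hypothetical placeholders.

```python
# Hypothetical sketch of a checklist rule layer; NOT the authors' code.
# It checks that the semantic item (from ASR), the speaker role (from SV),
# and the current checklist phase are mutually consistent, and returns one
# of the four prompt states: "pass", "fault", "alarm", "uncertain".

from dataclasses import dataclass

# Assumed ontology fragment: each checklist item belongs to one phase and
# is expected from one speaker role. Item and role names are invented.
RULES = {
    "patient_identity_confirmed": {"phase": "sign_in", "role": "nurse"},
    "antibiotic_prophylaxis_given": {"phase": "time_out", "role": "anesthetist"},
    "instrument_count_correct": {"phase": "sign_out", "role": "nurse"},
}

@dataclass
class Utterance:
    item: str          # semantic label produced by the ASR/NLU front end
    speaker_role: str  # role inferred by speaker verification
    asr_conf: float    # ASR confidence in [0, 1]
    sv_conf: float     # speaker-verification confidence in [0, 1]

def verify(u: Utterance, current_phase: str, conf_floor: float = 0.6) -> str:
    """Resolve one utterance against the rules for the current phase."""
    if min(u.asr_conf, u.sv_conf) < conf_floor:
        return "uncertain"          # low confidence: defer to the team
    rule = RULES.get(u.item)
    if rule is None:
        return "alarm"              # speech outside the known checklist
    if rule["phase"] != current_phase:
        return "alarm"              # known item spoken in the wrong phase
    if rule["role"] != u.speaker_role:
        return "fault"              # right item and phase, wrong speaker
    return "pass"
```

A table like `RULES` keeps the verification logic declarative and auditable: each decision can be traced back to a single ontology entry, which is what makes a knowledge-driven layer interpretable compared with an end-to-end classifier.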