Analysis of a Transformer End-to-End Model in Real-Time Interactive Scenarios Based on Speech Recognition Technology



Abstract

To address the uncertainty and semantic complexity of speech signals in real-time interactive scenarios and achieve more efficient and accurate speech recognition, this study proposes a Dynamic Adaptive Transformer for Real-Time Speech Recognition (DATR-SR) model. Extensive experiments and analysis were carried out on public datasets including Aishell-1, HKUST, LibriSpeech, and CommonVoice, as well as a Chinese TV-series dataset covering diverse contexts. The results show that DATR-SR exhibits strong adaptability and robust performance across language environments and dynamic scenes. As the data volume increases, the character error rate drops from 5.2% to 2.7%, inference latency remains within 15 ms, and resource utilization exceeds 75%, demonstrating efficient computation. Across the two kinds of datasets, the word error rate is as low as 4.3% and accuracy exceeds 91%. In complex contexts in particular, the semantic coherence rate reaches 92.3% and the speech-event recall rate 91.3%. Compared with other state-of-the-art models, DATR-SR delivers significant improvements in diverse speech-event recognition and in its response to dynamic scene switching. This study aims to provide an efficient speech recognition solution for developers and service providers in real-time interactive fields such as emotional social interaction, online education, and intelligent customer service, enhancing the user experience and supporting the intelligent development of industrial applications.
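The character and word error rates reported above are the standard edit-distance metrics used throughout speech recognition evaluation. The abstract does not give the model's own evaluation code, so the following is a minimal, generic sketch of how CER and WER are conventionally computed (Levenshtein distance over characters or words, normalized by the reference length); all function names here are illustrative, not from the paper.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via a single-row DP table."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance for the previous row
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the diagonal cell
        for j in range(1, n + 1):
            cur = dp[j]
            if ref[i - 1] == hyp[j - 1]:
                dp[j] = prev  # match: no edit
            else:
                # substitution, deletion, or insertion
                dp[j] = 1 + min(prev, dp[j], dp[j - 1])
            prev = cur
    return dp[n]

def word_error_rate(reference, hypothesis):
    """WER = word-level edits / number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def char_error_rate(reference, hypothesis):
    """CER = character-level edits / number of reference characters (spaces ignored)."""
    ref_chars = reference.replace(" ", "")
    return edit_distance(ref_chars, hypothesis.replace(" ", "")) / len(ref_chars)
```

For example, `word_error_rate("a b c d", "a x c")` counts one substitution and one deletion against four reference words, giving 0.5. A reported CER of 2.7% means roughly 2.7 character edits per 100 reference characters.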
