An Mcformer encoder integrating Mamba and Cgmlp for improved acoustic feature extraction


Abstract

Attention models based on the Conformer architecture are currently mainstream in speech recognition because they integrate self-attention mechanisms with convolutional networks. However, further research indicates that Conformers still have limited ability to capture global information. To address this limitation, the Mcformer encoder is introduced: it places the Mamba module in parallel with multi-head attention blocks to strengthen the model's global context processing, and it employs a Convolutional Gated Multilayer Perceptron (Cgmlp) structure whose depthwise convolutional layers improve local feature extraction. Experiments were conducted on the Aishell-1 and Common Voice zh 14 Chinese public datasets and the TED-LIUM 3 English public dataset. Without a language model, the Mcformer encoder achieves character error rates (CER) of 4.15%/4.48% on the Aishell-1 validation/test sets and 13.28%/13.06% on the Common Voice zh 14 validation/test sets; with a language model, these CERs further decrease to 3.88%/4.08% and 11.89%/11.29%, respectively. On TED-LIUM 3, without a language model, the word error rates (WER) are 7.26% on the validation set and 6.95% on the test set. These results substantiate the efficacy of Mcformer in improving speech recognition performance.
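
The abstract describes the architecture only at a high level. As a hedged illustration, the minimal PyTorch sketch below shows the two ideas it names: a Cgmlp-style branch that gates half of its channels with a depthwise convolution over time, and a block that runs multi-head self-attention in parallel with a state-space (Mamba-style) branch and fuses the two by addition. All class names, hyperparameters (d_hidden, kernel_size), and the additive fusion are assumptions made for illustration, not the paper's exact design; a real Mamba module (e.g., mamba_ssm's Mamba) could be passed in place of the stand-in used here.

```python
import torch
import torch.nn as nn

class ConvolutionalGatingMLP(nn.Module):
    """Sketch of a Cgmlp branch: gates channels via a depthwise temporal conv."""
    def __init__(self, d_model: int, d_hidden: int, kernel_size: int = 31):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_hidden)      # channel expansion
        self.gate_norm = nn.LayerNorm(d_hidden // 2)
        self.depthwise = nn.Conv1d(d_hidden // 2, d_hidden // 2, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=d_hidden // 2)  # depthwise over time
        self.proj_out = nn.Linear(d_hidden // 2, d_model)

    def forward(self, x):                                 # x: (batch, time, d_model)
        y = torch.nn.functional.gelu(self.proj_in(self.norm(x)))
        a, b = y.chunk(2, dim=-1)                         # split for the gating unit
        b = self.depthwise(self.gate_norm(b).transpose(1, 2)).transpose(1, 2)
        return x + self.proj_out(a * b)                   # gated output + residual

class ParallelGlobalBlock(nn.Module):
    """Self-attention and a state-space branch run in parallel, summed."""
    def __init__(self, d_model: int, n_heads: int, ssm: nn.Module):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ssm = ssm            # any (B, T, D) -> (B, T, D) module, e.g. Mamba

    def forward(self, x):
        y = self.norm(x)
        attn_out, _ = self.attn(y, y, y)
        return x + attn_out + self.ssm(y)                 # additive parallel fusion

# Usage with a trivial linear stand-in for the Mamba branch:
block = ParallelGlobalBlock(256, 4, ssm=nn.Linear(256, 256))
cgmlp = ConvolutionalGatingMLP(256, 1024)
x = torch.randn(2, 100, 256)                              # (batch, frames, features)
print(cgmlp(block(x)).shape)                              # torch.Size([2, 100, 256])
```

Summing the attention and state-space outputs is only one plausible fusion choice; concatenation followed by a projection would be an equally reasonable reading of "in parallel".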
