MAGTF-Net: Dynamic Speech Emotion Recognition with Multi-Scale Graph Attention and LLD Feature Fusion


Abstract

In this paper, we propose a novel speech emotion recognition model, MAGTF-Net (Multi-scale Attention Graph Transformer Fusion Network), which addresses the difficulty that traditional hand-crafted-feature approaches have in modeling complex emotional nuances and dynamic contextual dependencies. Although existing state-of-the-art methods have improved recognition performance, they often fail to capture local acoustic features and global temporal structure simultaneously, and they adapt poorly to variable-length utterances, limiting their accuracy and robustness on complex emotional expressions. To tackle these challenges, we design a log-Mel spectrogram feature-extraction branch that combines a Multi-scale Attention Graph (MAG) structure with a Transformer encoder, in which the Transformer module dynamically models speech sequences of varying lengths. In addition, a low-level descriptor (LLD) feature branch is introduced, in which a multilayer perceptron (MLP) performs complementary feature modeling. The two branches are fused and then classified through a fully connected layer, further enhancing the expressiveness of the emotional representations. Moreover, a label-smoothing-enhanced cross-entropy loss is adopted to improve recognition of hard-to-classify emotion categories. Experiments on the IEMOCAP dataset show that MAGTF-Net achieves a weighted accuracy (WA) of 69.15% and an unweighted accuracy (UA) of 70.86%, outperforming several baseline models. Ablation studies further confirm that each module in the Mel-spectrogram branch and the LLD branch contributes significantly to the overall improvement. The proposed method effectively integrates local, global, and multi-source feature information, significantly improving the recognition of complex emotional expressions and offering new theoretical and practical insights for speech emotion recognition.
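The label-smoothing-enhanced cross-entropy mentioned above is a standard construction: the one-hot target is mixed with a uniform distribution over the K classes, which discourages over-confident predictions on ambiguous emotion categories. A minimal NumPy sketch follows; the function name and the smoothing factor `eps = 0.1` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def label_smoothing_ce(logits, target, eps=0.1):
    """Cross-entropy with label smoothing (illustrative sketch).

    logits: shape (K,) unnormalised class scores for one utterance
    target: integer index of the true emotion class
    eps:    smoothing factor; eps = 0 recovers standard cross-entropy
    """
    k = logits.shape[0]
    # Numerically stable log-softmax.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    # Smoothed target: (1 - eps) on the true class, eps spread uniformly.
    q = np.full(k, eps / k)
    q[target] += 1.0 - eps
    return -(q * log_probs).sum()
```

With `eps = 0` the function reduces to ordinary cross-entropy; with `eps > 0` the loss also penalises very low probabilities assigned to the non-target classes, which is the regularising effect the paper relies on for hard-to-classify categories.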
