Abstract
In this paper, we propose a novel speech emotion recognition model, MAGTF-Net (Multi-scale Attention Graph Transformer Fusion Network), which addresses the difficulty that traditional hand-crafted-feature approaches have in modeling subtle emotional nuances and dynamic contextual dependencies. Although existing state-of-the-art methods have improved recognition performance, they often fail to capture local acoustic features and global temporal structures simultaneously, and they adapt poorly to variable-length utterances, which limits their accuracy and robustness on complex emotional expressions. To address these challenges, we design a log-Mel spectrogram feature branch that combines a Multi-scale Attention Graph (MAG) structure with a Transformer encoder, in which the Transformer module adaptively models speech sequences of varying lengths. In addition, a low-level descriptor (LLD) feature branch employs a multilayer perceptron (MLP) for complementary feature modeling. The two branches are fused and classified through a fully connected layer, further strengthening the expressiveness of the learned emotional representations. Moreover, a cross-entropy loss with label smoothing is adopted to improve recognition of hard-to-classify emotion categories. Experiments on the IEMOCAP dataset show that MAGTF-Net achieves weighted accuracy (WA) of 69.15% and unweighted accuracy (UA) of 70.86%, outperforming several baseline models. Ablation studies further confirm that each module in the Mel-spectrogram branch and the LLD branch contributes significantly to the overall performance.
The proposed method effectively integrates local, global, and multi-source feature information, substantially improving the recognition of complex emotional expressions and offering new theoretical and practical insights for speech emotion recognition.
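The label-smoothing-enhanced cross-entropy loss mentioned above can be illustrated with a minimal NumPy sketch. This is a generic illustration of label smoothing, not the paper's exact implementation; the smoothing factor `eps = 0.1` and the 4-class setup (matching the common IEMOCAP configuration of angry/happy/neutral/sad) are assumptions for the example.

```python
import numpy as np

def label_smoothed_ce(logits, target, num_classes, eps=0.1):
    """Cross-entropy with label smoothing (illustrative sketch).

    The one-hot target is softened to (1 - eps) on the true class and
    eps / num_classes spread uniformly over all classes, discouraging
    over-confident predictions on ambiguous emotion categories.
    """
    # Numerically stable log-softmax over the logits.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    # Smoothed target distribution instead of a hard one-hot vector.
    smooth = np.full(num_classes, eps / num_classes)
    smooth[target] += 1.0 - eps
    # Cross-entropy between the smoothed target and the prediction.
    return float(-(smooth * log_probs).sum())
```

With `eps = 0`, this reduces to the standard cross-entropy; with `eps > 0`, a confidently correct prediction still incurs a small loss, which regularizes the classifier on hard categories.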