Multimodal emotion recognition via adaptive high-order transformer network

Abstract

Multimodal emotion recognition leverages multiple modalities to capture emotional cues more comprehensively, thereby improving recognition accuracy and robustness. From the perspective of multimodal data and feature learning, reducing information redundancy in multimodal data and enhancing the discriminability of jointly learned deep features can effectively boost recognition performance. Motivated by this, this paper proposes a multimodal emotion recognition method built on an Adaptive High-order Transformer network (AHOT). The method constructs an Adaptive Selection Transformer (AST) block and a Cross-Modal Feature Fusion (CMFF) block for each modality branch to fully capture non-redundant feature representations within each modality as well as the interactions across modalities. In addition, a sparse high-order feature learning module is designed to learn highly discriminative high-order features across modalities. Experimental results on two multimodal emotion recognition datasets (IEMOCAP and CMU-MOSEI) demonstrate that, compared with several related methods, the proposed AHOT effectively improves emotion recognition accuracy. Ablation studies and parameter analyses further validate the effectiveness of AHOT.
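The abstract does not describe the internals of the CMFF block, so the following is only a minimal sketch of one plausible cross-modal fusion design, assuming a standard cross-attention layer in which one modality branch (e.g. audio) queries another (e.g. text). The class name, layer sizes, and residual structure are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a cross-modal feature fusion block using cross-attention.
# All architectural details (dimensions, heads, residual/FFN structure) are
# assumptions for illustration; the paper's CMFF block may differ.
import torch
import torch.nn as nn


class CrossModalFusionBlock(nn.Module):
    """Fuses a target modality's features with context from a source modality.

    Queries come from the target branch; keys and values come from the source
    branch, so the target attends to cross-modal cues.
    """

    def __init__(self, dim: int = 256, num_heads: int = 4, dropout: float = 0.1):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=dim, num_heads=num_heads, dropout=dropout, batch_first=True
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # target: (batch, T_target, dim), source: (batch, T_source, dim)
        attended, _ = self.cross_attn(query=target, key=source, value=source)
        x = self.norm1(target + attended)   # residual connection + layer norm
        x = self.norm2(x + self.ffn(x))     # position-wise feed-forward
        return x


if __name__ == "__main__":
    audio = torch.randn(8, 50, 256)  # e.g. audio-branch token features
    text = torch.randn(8, 30, 256)   # e.g. text-branch token features
    fused = CrossModalFusionBlock()(audio, text)
    print(fused.shape)               # torch.Size([8, 50, 256])
```

In such a design, each modality branch would apply one fusion block per other modality and combine the outputs, which is one common way to model pairwise inter-modal interactions before higher-order feature learning.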
