Abstract
Multimodal emotion recognition leverages multiple modalities to capture emotional cues more comprehensively, thereby improving accuracy and robustness. From the perspective of multimodal data and feature learning, reducing information redundancy across modalities and enhancing the discriminability of jointly learned deep features can effectively boost recognition performance. To this end, this paper proposes a multimodal emotion recognition method based on an Adaptive High-order Transformer Network (AHOT). The method constructs an Adaptive Selection Transformer (AST) block and a Cross-modal Feature Fusion (CMFF) block for each modality branch, aiming to fully capture non-redundant feature representations within each modality as well as the interactions between modalities. In addition, a sparse high-order feature learning module is designed to learn highly discriminative high-order features across modalities. Experimental results on two multimodal emotion recognition datasets (IEMOCAP and CMU-MOSEI) demonstrate that the proposed AHOT effectively improves emotion recognition accuracy compared with several related methods. Ablation studies and parameter analyses further validate the effectiveness of AHOT.