Abstract
Existing multimodal sentiment analysis (MSA) methods usually adopt fixed convolution kernels or static windows to model features at limited or fixed scales, making it difficult to dynamically model emotional features under different scale combinations. Furthermore, the absence of mechanisms to suppress redundant information in the non-linguistic (video and audio) modalities hinders further performance improvements. To address these limitations, we propose a text-guided multimodal scale path fusion network (TMSPF-Net). TMSPF-Net contains three main modules: the Multi-scale Adaptive Transformer (MAT), the Text-guided Conflict Elimination Module (TGCEM), and the Channel Fusion Module. MAT captures intra-modal and inter-modal interactions by combining patches of different sizes with a dual attention mechanism, fully extracting multi-level global and local emotional information. Meanwhile, the adaptive routing module in MAT dynamically optimizes feature paths through a learnable mechanism, enabling MAT to adaptively select the optimal path and increasing the model's flexibility when dealing with heterogeneous data. TGCEM leverages the multi-scale text-guided dynamic memory in MAT to filter conflicting signals and selectively preserve emotionally salient patterns in the non-linguistic modalities, thereby improving the consistency and semantic richness of multimodal representations. The Channel Fusion Module fuses the outputs of these two modules and feeds them into a pre-trained language model to complete the MSA task. Extensive experiments on the MOSI and MOSEI datasets demonstrate that TMSPF-Net outperforms state-of-the-art methods on most metrics. The results show that TMSPF-Net effectively guides the learning of non-linguistic modalities and integrates multi-level sentiment features, demonstrating strong potential for sentiment analysis.