Abstract
INTRODUCTION: Depression is a prevalent mental disorder with a severe global impact. Traditional interview-based assessments are limited by subjectivity, lengthy procedures, and unequal access to care. Advances in AI have enabled multimodal models for depression detection that use audiovisual data as an accessible alternative to biosignals, yet current approaches remain challenged by inefficient long-term temporal modeling and superficial multimodal fusion, while biosignal-based methods are constrained by high costs and narrow applicability. These challenges underscore the urgent need for optimized multimodal solutions.

METHODS: This paper proposes ASYM (Attentive Synergy Mamba), a novel multimodal architecture for depression recognition comprising three core modules: a Cross-Modal Interactive Mamba, a Multi-Scale Gated Parallel Fusion, and a Multimodal Enhanced Mamba. First, features from each modality are interactively enhanced using convolutional neural network (CNN) and Bi-Mamba blocks. Cross-modal complementary information is then extracted via a cross-attention mechanism. A dual-path fusion module subsequently augments multi-scale representations and integrates cross-modal features through dynamic weighting. Finally, the fused representations are refined by a stack of Bi-Mamba blocks.

RESULTS: On the D-Vlog and LMVD datasets, evaluated with accuracy, precision, recall, and F1-score, ASYM achieved 70.91% accuracy and a 77.13% F1-score on D-Vlog, and 74.68% accuracy and a 74.90% F1-score on LMVD, surpassing all compared mainstream methods in macro-average performance across both datasets. Ablation studies confirmed that each component is necessary: removing any module significantly degraded performance.

DISCUSSION: Although multimodal depression detection has improved upon single-modality approaches, computational inefficiency in long-sequence processing and inadequate fusion strategies persist. Our model addresses these limitations through multimodal interaction and multi-scale feature fusion. Future work will focus on clinical validation across diverse populations to bridge computational psychiatry and clinical practice.
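To make the METHODS pipeline concrete, the following is a minimal PyTorch sketch of the described data flow (per-modality CNN + Bi-Mamba enhancement, cross-attention between modalities, dynamically gated fusion, and Bi-Mamba refinement). Everything here is an illustrative assumption, not the authors' implementation: all layer sizes and module names are invented, the cross-attention step is realized with `nn.MultiheadAttention`, and the Bi-Mamba blocks are approximated by a bidirectional-GRU placeholder so the sketch runs without a Mamba dependency.

```python
# Illustrative sketch only; module internals and dimensions are assumptions,
# not the ASYM reference implementation.
import torch
import torch.nn as nn

class BiMambaBlock(nn.Module):
    """Placeholder for a bidirectional Mamba block (assumption: approximated
    here with a bidirectional GRU so the sketch is runnable)."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (batch, time, dim)
        out, _ = self.rnn(x)
        return self.norm(x + out)               # residual connection + norm

class ASYMSketch(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Per-modality enhancement: CNN + Bi-Mamba (Cross-Modal Interactive Mamba)
        self.conv_a = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.conv_v = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.mamba_a = BiMambaBlock(dim)
        self.mamba_v = BiMambaBlock(dim)
        # Cross-attention extracting complementary information between modalities
        self.cross_av = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_va = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Dynamic gate weighting the two paths (Multi-Scale Gated Parallel Fusion)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        # Final refinement stack (Multimodal Enhanced Mamba)
        self.refine = nn.Sequential(BiMambaBlock(dim), BiMambaBlock(dim))
        self.head = nn.Linear(dim, 2)           # depressed / not depressed

    def forward(self, audio, video):            # each: (batch, time, dim)
        a = self.mamba_a(self.conv_a(audio.transpose(1, 2)).transpose(1, 2))
        v = self.mamba_v(self.conv_v(video.transpose(1, 2)).transpose(1, 2))
        av, _ = self.cross_av(a, v, v)          # audio attends to video
        va, _ = self.cross_va(v, a, a)          # video attends to audio
        g = self.gate(torch.cat([av, va], dim=-1))
        fused = g * av + (1 - g) * va           # dynamically weighted fusion
        return self.head(self.refine(fused).mean(dim=1))  # pool over time

x_a = torch.randn(2, 50, 128)                  # dummy audio features
x_v = torch.randn(2, 50, 128)                  # dummy video features
print(ASYMSketch()(x_a, x_v).shape)            # torch.Size([2, 2])
```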