Interaction-Driven Dynamic Fusion for Multimodal Depression Detection: A Controlled Analysis of Gating and Cross-Attention Under Class Imbalance


Abstract

Background/Objectives: Multimodal depression detection research has traditionally relied on early or hybrid fusion strategies without systematically analyzing how dynamic fusion mechanisms interact with modality-specific pretraining. Although gated and attention-based architectures are increasingly adopted, their behavior is rarely examined within a structured fusion taxonomy framework. Methods: In this study, we conduct a controlled taxonomy-level evaluation of multimodal fusion strategies on a Japanese PHQ-9-annotated depression dataset. We compare four fusion paradigms (concatenation, summation, gated fusion, and cross-attention) across three integration stages, crossed with modality-specific affective pretraining configurations for visual (CMU-MOSI/MOSEI), acoustic (JTES), and textual (WRIME) encoders, yielding 512 experimental conditions. Results: The results reveal strong position-dependent effects of fusion strategy. Cross-attention fusion at the audio integration stage achieved the highest mean AUC (0.774) and PR-AUC (0.606), with statistically significant superiority over gated and concatenation-based fusion (Kruskal-Wallis H=86.28, p<0.001). In contrast, fusion effects at the text stage were non-significant in AUC but significant in PR-AUC, highlighting metric-sensitive behavior under class imbalance. Pretraining effects were modality-specific: SigLIP initialization produced significant positive transfer (Δ=+0.018, p<0.001), whereas audio pretraining on JTES resulted in negative transfer (Δ=-0.014, p=0.004), suggesting domain mismatch effects. Gate analysis further revealed condition-dependent modality dominance, including cases of semantic-geometric reversal under joint auxiliary augmentation. Conclusions: Our findings suggest that multimodal depression detection systems should not be interpreted through static fusion categories alone. Instead, modality contribution appears to be associated with structured interaction effects between fusion strategy, integration position, and affective pretraining. This work provides a controlled empirical bridge between fusion taxonomy and dynamic modality weighting in clinical multimodal modeling.
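To make the two dynamic fusion paradigms compared in the abstract concrete, the following is a minimal NumPy sketch of gated fusion (a learned sigmoid gate that convexly mixes two modality embeddings) and single-head cross-attention (queries from one modality attending over tokens of another). This is an illustrative sketch only, not the paper's implementation: the dimension, weight initialization, and variable names (`a` for an acoustic embedding, `v` for a visual embedding) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_fusion(a, v, W, b):
    # Per-dimension gate in (0, 1) computed from the concatenated modalities;
    # the fused vector is an elementwise convex combination of a and v.
    g = sigmoid(np.concatenate([a, v]) @ W + b)
    return g * a + (1.0 - g) * v

def cross_attention(q_tokens, kv_tokens):
    # Single-head scaled dot-product cross-attention: each query token is
    # rewritten as a softmax-weighted average of the other modality's tokens.
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ kv_tokens

d = 8                                   # illustrative embedding size
a = rng.standard_normal(d)              # e.g. acoustic embedding (assumed)
v = rng.standard_normal(d)              # e.g. visual embedding (assumed)
W = rng.standard_normal((2 * d, d)) * 0.1
b = np.zeros(d)

fused_gate = gated_fusion(a, v, W, b)                       # shape (8,)
fused_attn = cross_attention(a[None, :], np.stack([a, v]))  # shape (1, 8)
```

Because the gate is elementwise and sigmoid-bounded, each fused coordinate lies between the corresponding coordinates of the two inputs, which is what makes gate values interpretable as modality weights in the kind of gate analysis the abstract describes.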
