Abstract
Background/Objectives: Multimodal depression detection research has traditionally relied on early or hybrid fusion strategies without systematically analyzing how dynamic fusion mechanisms interact with modality-specific pretraining. Although gated and attention-based architectures are increasingly adopted, their behavior is rarely examined within a structured fusion taxonomy. Methods: In this study, we conduct a controlled, taxonomy-level evaluation of multimodal fusion strategies on a Japanese PHQ-9-annotated depression dataset. We compare four fusion paradigms (concatenation, summation, gated fusion, and cross-attention) across three integration stages, crossed with modality-specific affective pretraining configurations for visual (CMU-MOSI/MOSEI), acoustic (JTES), and textual (WRIME) encoders, yielding 512 experimental conditions. Results: The results reveal strong position-dependent effects of fusion strategy. Cross-attention fusion at the audio integration stage achieved the highest mean AUC (0.774) and PR-AUC (0.606), with statistically significant superiority over gated and concatenation-based fusion (Kruskal-Wallis H=86.28, p<0.001). In contrast, fusion effects at the text stage were non-significant in AUC but significant in PR-AUC, highlighting metric-sensitive behavior under class imbalance. Pretraining effects were modality-specific: SigLIP initialization produced significant positive transfer (Δ=+0.018, p<0.001), whereas audio pretraining on JTES resulted in negative transfer (Δ=-0.014, p=0.004), suggesting domain mismatch effects. Gate analysis further revealed condition-dependent modality dominance, including cases of semantic-geometric reversal under joint auxiliary augmentation. Conclusions: Our findings suggest that multimodal depression detection systems should not be interpreted through static fusion categories alone. Instead, modality contribution appears to be associated with structured interaction effects among fusion strategy, integration position, and affective pretraining. This work provides a controlled empirical bridge between fusion taxonomy and dynamic modality weighting in clinical multimodal modeling.
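To make the two dynamic fusion paradigms named above concrete, the sketch below gives a minimal, simplified PyTorch rendering of gated fusion (a learned sigmoid gate weights each modality, and the gate values themselves are what a gate analysis inspects) and cross-attention fusion (one modality's features query another's). This is our own illustration, not the authors' released code; all module names, dimensions, and shapes are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Fuses two modality vectors via a learned per-feature sigmoid gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Gate in [0, 1]; values near 1 indicate dominance of modality `a`.
        g = torch.sigmoid(self.gate(torch.cat([a, b], dim=-1)))
        return g * a + (1 - g) * b  # convex combination of the two modalities


class CrossAttentionFusion(nn.Module):
    """One modality's sequence attends to (queries) another's."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query_seq: torch.Tensor, context_seq: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query_seq, context_seq, context_seq)
        return fused.mean(dim=1)  # pool the attended sequence to one vector


# Illustrative shapes only: batch of 8, 16-step sequences, 128-dim features.
audio = torch.randn(8, 16, 128)
text = torch.randn(8, 16, 128)
gated = GatedFusion(128)(audio.mean(dim=1), text.mean(dim=1))  # -> (8, 128)
attended = CrossAttentionFusion(128)(audio, text)              # -> (8, 128)
```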