Abstract
Visual Question Answering (VQA) requires integrating image and text information to provide accurate answers to user queries. Despite advances in multi-modal learning, traditional multi-head attention models suffer from limited interaction between attention heads and an inability to capture positional information, both of which are critical for modeling intra-modal and cross-modal connections. In this work, we propose a novel position-aware collaborative attention framework to address these challenges. Our framework introduces an Inter-Head Communication Matrix (IHCM) before and after normalization in multi-head attention, enabling effective information sharing across attention heads. We design two collaborative attention components: Intra-modal Self-Attention with Collaboration (IMSAC), which refines single-modality features, and Cross-modal Guided Attention with Collaboration (CMGAC), which leverages textual information to guide image attention. To further enhance positional awareness, absolute positional encoding is incorporated into the self-attention mechanism, significantly improving the semantic representation of text features. We evaluate our framework on the TDIUC, VQA-CP v2, and GQA datasets to demonstrate its effectiveness and robustness. Our collaborative attention blocks consistently improve accuracy across question categories, with the combination of IMSAC and CMGAC achieving the best results. Comprehensive ablation studies confirm the importance of inter-head collaboration and positional encoding, highlighting their contributions to narrowing the "semantic gap" and enhancing cross-modal reasoning. The proposed framework achieves competitive or superior performance compared to several recent attention-based methods, demonstrating strong resilience to language bias and solid compositional reasoning capabilities.
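To make the core mechanism concrete, the sketch below shows one plausible way to insert learned head-mixing (inter-head communication) matrices before and after the softmax normalization of multi-head attention, in the spirit of the IHCM described above. This is a minimal illustrative implementation under our own assumptions; the class and parameter names (e.g., `CollaborativeMultiHeadAttention`, `mix_pre`, `mix_post`) are hypothetical and do not reproduce the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CollaborativeMultiHeadAttention(nn.Module):
    """Multi-head attention with learned inter-head communication matrices.

    Illustrative sketch only: the pre-/post-softmax head-mixing matrices and
    all names here are assumptions, not the paper's exact formulation.
    """

    def __init__(self, dim, num_heads):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        # Inter-head communication matrices applied to the stacked per-head
        # attention maps before and after softmax normalization.
        self.mix_pre = nn.Parameter(torch.eye(num_heads))
        self.mix_post = nn.Parameter(torch.eye(num_heads))

    def forward(self, query, key_value):
        B, Nq, D = query.shape
        Nk = key_value.shape[1]
        # Project and split into heads: (B, H, N, head_dim).
        q = self.q_proj(query).view(B, Nq, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(key_value).view(B, Nk, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(key_value).view(B, Nk, self.num_heads, self.head_dim).transpose(1, 2)

        # Scaled dot-product attention logits: (B, H, Nq, Nk).
        logits = torch.matmul(q, k.transpose(-2, -1)) / self.head_dim ** 0.5

        # Share information across heads before normalization.
        logits = torch.einsum("gh,bhqk->bgqk", self.mix_pre, logits)
        attn = F.softmax(logits, dim=-1)
        # Share information across heads after normalization.
        attn = torch.einsum("gh,bhqk->bgqk", self.mix_post, attn)

        out = torch.matmul(attn, v).transpose(1, 2).reshape(B, Nq, D)
        return self.out_proj(out)
```

Under this reading, an IMSAC-style block would call the module with the same modality as query and key/value (with absolute positional encodings added to the text features beforehand), while a CMGAC-style block would pass question features as the query and image features as key/value; the exact wiring in the full framework may differ from this sketch.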