Abstract
Medical image fusion integrates complementary information from multimodal medical images to provide a comprehensive reference for clinical decision-making, such as the diagnosis of Alzheimer's disease and the detection and segmentation of brain tumors. Although traditional and deep learning-based fusion methods have been studied extensively, they often lack targeted strategies that fully exploit region-specific or feature-specific information. This paper proposes SAFFusion, a saliency-aware frequency fusion network that integrates intensity and texture cues from multimodal medical images. We first introduce Mamba-UNet, a multiscale encoder-decoder architecture enhanced with Mamba blocks, to improve global modeling during feature extraction and image reconstruction. Within Mamba-UNet, the contourlet transform replaces conventional pooling, yielding multiscale representations and decomposing spatial features into high- and low-frequency subbands. A dual-branch frequency feature fusion module then fuses cross-modality information according to the distinct characteristics of these subbands. Furthermore, we apply latent low-rank representation (LatLRR) to assess image saliency and impose adaptive loss constraints that preserve information in both salient and non-salient regions. Quantitative results on CT/MRI, SPECT/MRI, and PET/MRI fusion tasks show that SAFFusion outperforms state-of-the-art methods, and qualitative evaluations confirm that it effectively merges prominent intensity features and rich textures from multiple source modalities.