Abstract
Medical image fusion is a critical task in medical diagnosis, where anatomical and functional information from different imaging modalities, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can be integrated. However, edge preservation, texture richness, and structural consistency remain major challenges in complex fusion scenarios. This paper presents a novel multimodal medical image fusion technique based on the Contourlet Transform for multiscale directional decomposition and a mean curvature filter for edge preservation. The proposed approach decomposes the source images into low-frequency and high-frequency components via a three-level Contourlet Transform. The low-frequency layers are fused via weighted averaging for brightness consistency, while the detail layers are processed by the mean curvature filter and then fused via maximum-absolute selection to preserve edges and texture. The approach was evaluated on a variety of multimodal medical image datasets and showed consistent improvements over conventional methods such as Guided Filter Fusion (GFF), Laplacian Pyramid (LP), and Discrete Wavelet Transform (DWT). Experimental results showed average improvements of 19.4% in Spatial Frequency (SF), 17.6% in Average Gradient (AG), and 13.2% in Entropy (EN) over the baseline methods. These results demonstrate that the method is useful for medical applications such as brain tumor localization, tissue differentiation, and surgical planning, where high fidelity in the fused image is critical.
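The two subband fusion rules named above can be sketched in a few lines. The following is a minimal illustration only: it applies weighted averaging to low-frequency coefficients and maximum-absolute selection to detail coefficients, using small toy arrays as stand-ins for Contourlet subbands. The Contourlet decomposition itself, the mean curvature filtering step, and the function names here are not from the paper.

```python
import numpy as np

def fuse_lowpass(lo_a, lo_b, w=0.5):
    # Weighted averaging of low-frequency subbands (brightness consistency).
    # w is a hypothetical mixing weight; the paper does not specify its value here.
    return w * lo_a + (1.0 - w) * lo_b

def fuse_detail(hi_a, hi_b):
    # Maximum-absolute selection: keep the coefficient with the larger
    # magnitude at each position, preserving strong edges and texture.
    return np.where(np.abs(hi_a) >= np.abs(hi_b), hi_a, hi_b)

# Toy coefficient arrays standing in for subbands of two source images.
lo_a = np.full((4, 4), 0.8)
lo_b = np.full((4, 4), 0.4)
hi_a = np.array([[0.9, -0.1], [0.0, 0.5]])
hi_b = np.array([[-0.2, 0.7], [0.3, -0.6]])

fused_lo = fuse_lowpass(lo_a, lo_b)   # equal-weight average -> all 0.6
fused_hi = fuse_detail(hi_a, hi_b)    # per-pixel max-magnitude selection
print(fused_hi)
```

In the full method these rules would be applied per subband of the three-level decomposition, after the detail layers have been smoothed by the mean curvature filter, and the fused image recovered by the inverse transform.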