Multimodal medical image fusion combining saliency perception and generative adversarial network


Abstract

Multimodal medical image fusion is crucial for enhancing diagnostic accuracy by integrating complementary information from different imaging modalities. Current fusion techniques struggle to combine heterogeneous features effectively while preserving critical diagnostic information. This paper presents the Temporal Decomposition Network (TDN), a novel deep learning architecture that optimizes multimodal medical image fusion through feature-level temporal analysis and adversarial learning. The TDN architecture incorporates two key components: a saliency perception model for discriminative feature extraction and a generative adversarial network for temporal feature matching. The saliency perception model identifies and classifies distinct pixel distributions across imaging modalities, while the adversarial component performs accurate feature mapping and fusion. This approach enables precise temporal decomposition of heterogeneous features and robust quality assessment of fused regions. Experimental validation on diverse medical image datasets, spanning multiple modalities and image dimensions, demonstrates the TDN's superior performance: compared to state-of-the-art methods, the framework achieves an 11.378% improvement in fusion accuracy and a 12.441% improvement in precision. These results indicate significant potential for clinical applications, particularly in radiological diagnosis, surgical planning, and medical image analysis, where multimodal visualization is critical for accurate interpretation and decision-making.
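The abstract does not give the TDN's internals, but the core idea of saliency-guided fusion can be illustrated with a minimal sketch. The code below is an assumption for illustration only: it uses deviation from the global mean as a crude saliency proxy (standing in for the paper's learned saliency perception model, with no adversarial component) and blends two modality images per pixel according to their relative saliency.

```python
import numpy as np

def saliency_map(img, eps=1e-8):
    # Crude saliency proxy: absolute deviation from the global mean,
    # normalized to [0, 1]. A stand-in for a learned saliency model.
    s = np.abs(img - img.mean())
    return s / (s.max() + eps)

def saliency_weighted_fusion(img_a, img_b, eps=1e-8):
    # Each output pixel is a convex combination of the two inputs,
    # weighted by their relative per-pixel saliency.
    sa, sb = saliency_map(img_a), saliency_map(img_b)
    w = sa / (sa + sb + eps)
    return w * img_a + (1.0 - w) * img_b

# Synthetic stand-ins for co-registered MRI and CT slices.
rng = np.random.default_rng(0)
mri = rng.random((64, 64))
ct = rng.random((64, 64))
fused = saliency_weighted_fusion(mri, ct)
print(fused.shape)
```

Because the blend is a per-pixel convex combination, every fused value lies between the corresponding MRI and CT intensities; a learned model such as the TDN replaces this hand-crafted weighting with features trained adversarially.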
