Explainable analysis of infrared and visible light image fusion based on deep learning


Abstract

Explainability is a very active area of research in machine learning and image processing. This paper investigates the explainability of visible-light and infrared image fusion in order to strengthen confidence in understanding and applying such models. First, a multimodal image fusion model is proposed that combines the strengths of convolutional neural networks (CNNs) for local context extraction with the Transformer's global attention mechanism. Second, to enhance the explainability of the model, the Delta Debugging Fuse Image (DDFImage) algorithm is employed to generate local explanatory information. Finally, deeper insight into the internal workings of the model is gained through feature-importance analysis of the generated explanatory fusion images. Comparative analysis against other explainability algorithms demonstrates the superior performance of the proposed algorithm. This comprehensive approach not only improves the explainability of the model but also provides a stronger basis for its practical application.
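The abstract does not describe how DDFImage works internally, but delta debugging generally follows the classic ddmin scheme: repeatedly partition the input (here, image blocks), test complements of each partition, and shrink toward a minimal subset that still preserves the property of interest (e.g., keeping the fused output close to the original). The sketch below is a minimal, illustrative ddmin loop over abstract "blocks" with a toy relevance predicate; the names `ddmin` and `is_relevant` are assumptions for illustration, not the authors' code.

```python
# Illustrative ddmin-style search (not the paper's DDFImage implementation).
# `blocks` stands in for image regions; `is_relevant` stands in for a check
# such as "masking everything except these blocks still reproduces the
# model's fusion output closely enough".

def ddmin(blocks, is_relevant):
    """Return a small subset of `blocks` for which is_relevant still holds."""
    n = 2  # current number of partitions
    while len(blocks) >= 2:
        chunk = max(1, len(blocks) // n)
        subsets = [blocks[i:i + chunk] for i in range(0, len(blocks), chunk)]
        reduced = False
        for s in subsets:
            # Try removing this subset: keep only its complement.
            complement = [b for b in blocks if b not in s]
            if is_relevant(complement):
                blocks = complement          # complement suffices; shrink
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(blocks):
                break                        # already at finest granularity
            n = min(len(blocks), n * 2)      # refine the partition
    return blocks

# Toy check: suppose blocks 3 and 7 jointly determine the fusion score.
important = {3, 7}
result = ddmin(list(range(10)), lambda bs: important.issubset(bs))
# `result` shrinks to just the relevant blocks, here 3 and 7.
```

In an image-fusion setting, the relevance predicate would run the fusion model on a partially masked input pair and compare the output with the unmasked fusion result; the surviving blocks then serve as a local explanation of which regions drive the fused image.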
