Source camera attribution using a rule-based explainable convolutional neural network


Abstract

In recent years, there has been a push towards adopting artificial intelligence (AI) models in digital forensics (DF), particularly deep learning (DL) models. While these models assist DF experts, their lack of transparency raises concerns about reliability. Although eXplainable Artificial Intelligence (XAI) has progressed, current methods remain limited for DF applications. Existing visual XAI techniques do not provide sufficient clarity for challenging image forensics tasks such as Source Camera Identification (SCI), nor do they offer mechanisms to assess whether a model's decision is correct. Most methods simply highlight influential regions without enabling examiners to validate the decision itself. Rule-based explainability is a promising strategy for increasing transparency, yet deploying it on real-world Convolutional Neural Networks (CNNs) is still challenging. Prior studies remain largely experimental and often require modifying the model to extract rules, conflicting with the integrity requirements of DF workflows. To address these gaps, this paper introduces a framework that makes CNN models used in the analysis stage of digital forensics explainable. By following three fundamental steps (layer trace detection, layer majority voting, and rule extraction), the framework provides structured, transparent visual output and rule-based textual explanations that are understandable to the user. Building on this framework, the first explainable Source Camera Identification (SCI) model is introduced, addressing a DF task that is particularly challenging to make explainable. The explainable output allows the DF examiner to confirm or reject the main model's prediction based on the decisions of the individual layers, while complying with the DF principle of integrity. In addition, by identifying 27 of the base model's 37 incorrect predictions, the framework improved precision from 97.33% to 99.2%.
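The confirm-or-reject mechanism described above can be illustrated with a minimal sketch. The per-layer class votes, camera labels, and the 0.5 agreement threshold below are hypothetical assumptions for illustration; the actual framework derives layer decisions from CNN layer traces and extracted rules.

```python
# Minimal sketch of layer-wise majority voting to confirm or reject a
# base model's prediction. All inputs here are hypothetical examples.
from collections import Counter

def majority_vote(layer_votes):
    """Return the class most layers agree on, with its vote share."""
    counts = Counter(layer_votes)
    winner, n = counts.most_common(1)[0]
    return winner, n / len(layer_votes)

def confirm_or_reject(base_prediction, layer_votes, threshold=0.5):
    """Confirm the base prediction only when a majority of layers
    vote for the same class; otherwise flag it for rejection."""
    winner, share = majority_vote(layer_votes)
    confirmed = (winner == base_prediction) and share > threshold
    return confirmed, winner, share

# Example: five layers vote on the source camera of one image.
votes = ["CameraA", "CameraA", "CameraB", "CameraA", "CameraA"]
print(confirm_or_reject("CameraA", votes))  # (True, 'CameraA', 0.8)
```

A rejection (disagreement between layers and the base model) signals the examiner to inspect the case manually, which is how the reported filtering of 27 of 37 incorrect predictions would operate in practice.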
