The shallowest transparent and interpretable deep neural network for image recognition


Abstract

Trusting the decisions of deep learning models requires transparency of their reasoning process, especially for high-risk decisions. In this paper, a fully transparent deep learning model, Shallow-ProtoPNet, is introduced. The model consists of a transparent prototype layer followed by an indispensable fully connected layer that connects prototypes to logits. Interpretable models are usually not fully transparent because they build on a black-box backbone; this is the key difference between Shallow-ProtoPNet and the prototypical part network (ProtoPNet): ProtoPNet uses the convolutional layers of black-box models as its backbone, whereas the proposed Shallow-ProtoPNet uses no black-box components at all. On a dataset of X-ray images, the model's performance is comparable to that of other interpretable models that are not completely transparent. Since Shallow-ProtoPNet has only one (transparent) convolutional layer and one fully connected layer, it is the shallowest transparent deep neural network, with only two layers between the input and output layers. The model is therefore much smaller than its counterparts, making it suitable for use in embedded systems.
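The two-layer architecture described above (a prototype layer scoring image patches against learned prototypes, followed by a fully connected layer mapping prototype similarities to class logits) can be sketched roughly as follows. This is a minimal illustrative toy in NumPy, not the paper's implementation: all shapes, the sliding-window distance computation, and the log-based similarity activation (borrowed from the general ProtoPNet formulation) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8   # toy image grid (hypothetical size)
P, K = 5, 3   # number of prototypes, prototype patch size (assumed)
C = 2         # number of classes

image = rng.standard_normal((H, W))
prototypes = rng.standard_normal((P, K, K))   # learned patch templates
fc_weight = rng.standard_normal((C, P))       # prototype -> logit weights

def prototype_similarities(img, protos):
    """For each prototype, slide over every KxK patch of the image,
    find the minimum squared L2 distance, and convert it to a
    similarity score via a log activation (ProtoPNet-style; assumed)."""
    sims = np.empty(protos.shape[0])
    for p in range(protos.shape[0]):
        best = np.inf
        for i in range(img.shape[0] - K + 1):
            for j in range(img.shape[1] - K + 1):
                patch = img[i:i + K, j:j + K]
                best = min(best, np.sum((patch - protos[p]) ** 2))
        sims[p] = np.log((best + 1.0) / (best + 1e-4))
    return sims

# Layer 1 (transparent prototype layer): patch-to-prototype similarities.
sims = prototype_similarities(image, prototypes)   # shape (P,)
# Layer 2 (fully connected): similarities -> class logits.
logits = fc_weight @ sims                          # shape (C,)
pred = int(np.argmax(logits))
print(sims.shape, logits.shape, pred)
```

Because every logit is a weighted sum of patch-to-prototype similarities, each prediction can be traced back to which prototype matched which image region, which is the source of the model's transparency.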
