Defending against and generating adversarial examples together with generative adversarial networks


Abstract

Although deep neural networks have achieved great success in many tasks, they face security threats and are often fooled by adversarial examples, which are created by making slight modifications to pixel values. To address these problems, a novel DG-GAN framework is proposed, integrating a generator, an encoder, and a discriminator, to defend against and generate adversarial examples with generative adversarial networks. Under the DG-GAN framework, we establish the relationship between defending against and generating adversarial examples through a bidirectional mapping between images and adversarial examples: the generator can be used to defend against adversarial examples, while the encoder can be used to generate adversarial examples without gradient information. Moreover, the proposed DG-GAN can be paired with any classification model and modifies neither the classifier's structure nor its training procedure. We design a series of experiments to validate the DG-GAN framework. According to the results, as a defense method, DG-GAN effectively defends against different attacks and improves on existing defense strategies. On the other hand, DG-GAN also serves as a black-box attack, achieving attack performance comparable to that of existing attack methods.
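The bidirectional mapping described above can be illustrated with a minimal sketch. The linear maps, dimensions, and function names below are illustrative assumptions standing in for the paper's actual encoder and generator networks; the point is only the roles the two directions play: the encoder crafts an adversarial example without gradient access, and the generator maps it back toward the clean input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two DG-GAN components (names, shapes, and the linear
# form are illustrative assumptions, not the paper's architectures).
DIM = 8
W_enc = rng.normal(scale=0.1, size=(DIM, DIM))  # encoder E: image -> perturbation
W_gen = np.linalg.pinv(np.eye(DIM) + W_enc)     # generator G: approximate inverse of E's mapping

def encode_attack(x):
    """Encoder direction: craft an adversarial example without gradient information."""
    return x + W_enc @ x  # small additive perturbation of the input

def generate_defense(x_adv):
    """Generator direction: map an adversarial example back toward the clean image."""
    return W_gen @ x_adv

x = rng.normal(size=DIM)          # a "clean image" vector
x_adv = encode_attack(x)          # black-box attack via the encoder
x_rec = generate_defense(x_adv)   # defense via the generator

# Because G approximates the inverse of E's mapping here, x_rec is close to x.
print(np.allclose(x, x_rec, atol=1e-6))
```

In the actual framework both directions are learned networks trained jointly with a discriminator; the toy inverse pair above only shows why one bidirectional mapping yields both an attack and a defense.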
