LatAtk: A Medical Image Attack Method Focused on Lesion Areas with High Transferability

Abstract

The rise of trustworthy machine learning has prompted concerns about the security, reliability, and controllability of deep learning, especially when it is applied to sensitive areas involving life and health. To analyze potential attacks thoroughly and drive innovation in security technologies for DNNs, this paper studies adversarial attacks on medical images and proposes LatAtk, a medical image attack method that focuses on lesion areas and transfers well across models. First, using an image segmentation algorithm, LatAtk divides the target image into an attackable region (the lesion area) and a non-attackable region, and injects perturbations only into the attackable region to disrupt the attention of DNNs. Second, a class activation loss based on gradient-weighted class activation mapping (Grad-CAM) is proposed: by estimating the importance of image features, the features that contribute positively to the model's decision are perturbed further, which makes LatAtk highly transferable. Third, a texture feature loss based on local binary patterns is proposed as a constraint that limits damage to non-semantic features, effectively preserving the texture of target images and improving the concealment of adversarial examples. Experimental results show that LatAtk achieves superior aggressiveness, transferability, and concealment compared with strong baselines.
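The texture constraint above is built on local binary patterns (LBP), a classic descriptor that encodes each pixel by thresholding its 8 neighbours against the centre. The abstract does not give the paper's exact loss formulation, so the following is only a minimal NumPy sketch of an LBP descriptor that such a loss could compare between a clean image and its adversarial counterpart (the function names and the histogram-based comparison are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP codes for a 2-D grayscale image.

    Border pixels are skipped; each interior pixel is compared with its
    8 neighbours (clockwise from the top-left), and a neighbour >= centre
    contributes one bit to the resulting 8-bit code.
    """
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: a simple texture descriptor.

    A texture-preservation loss could penalise the distance between this
    histogram for the clean image and for the perturbed one (an assumed
    formulation for illustration only).
    """
    codes = lbp_codes(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(codes.size, 1)
```

For example, `np.abs(lbp_histogram(clean) - lbp_histogram(adv)).sum()` would give a crude L1 texture-discrepancy term that an optimiser could keep small while the attack perturbs semantically important lesion features.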
