Abstract
PURPOSE: Deep learning-based medical image segmentation plays a crucial role in computer-aided medical diagnosis. However, these models remain vulnerable to imperceptible adversarial attacks, which can lead to misdiagnosis in clinical practice. Research on adversarial attack methods is beneficial for improving the robustness of medical image segmentation models. Currently, adversarial attacks on deep learning-based medical image segmentation models are underexplored: existing attack methods often perform poorly in both attack effectiveness and the image quality of adversarial examples, and they focus primarily on nontargeted attacks. To address these limitations and further investigate adversarial attacks on segmentation models, we propose a new adversarial attack approach.

APPROACH: We propose the momentum-driven adaptive feature-cosine-similarity with low-frequency constraint attack (MAFL-Attack). The proposed feature-cosine-similarity loss uses high-level abstract semantic information to disrupt the model's understanding of adversarial examples. The low-frequency component constraint preserves the imperceptibility of adversarial examples by restricting perturbations to their low-frequency components. In addition, momentum and a dynamic step-size calculator are used to enhance the attack process.

RESULTS: Experimental results demonstrate that MAFL-Attack generates adversarial examples with superior targeted attack effects compared with the existing Adaptive Segmentation Mask Attack method, as measured by Intersection over Union, accuracy, L2, L∞, Peak Signal-to-Noise Ratio, and Structural Similarity Index Measure.

CONCLUSIONS: The design of MAFL-Attack can inspire researchers to develop corresponding defensive measures that strengthen the robustness of segmentation models.
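The APPROACH described above combines three ingredients: a feature-cosine-similarity objective, a low-frequency constraint on the perturbation, and a momentum-accumulated update. A minimal, hypothetical NumPy sketch of how such a targeted attack loop could fit together is shown below; the random linear map standing in for the segmentation model's feature extractor, the FFT-based low-frequency projection, and all hyperparameters (`alpha`, `mu`, `eps`, `keep`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (hypothetical): a fixed random linear map plays the role
# of the segmentation model's high-level feature extractor.
H = W = 8
Wf = rng.standard_normal((16, H * W)) / np.sqrt(H * W)

def features(x):
    return Wf @ x.reshape(-1)

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def grad_cos_wrt_x(x, t_feat):
    """Analytic gradient of cos(features(x), t_feat) w.r.t. the input x."""
    f = features(x)
    nf, nt = np.linalg.norm(f), np.linalg.norm(t_feat)
    g_f = t_feat / (nf * nt) - (f @ t_feat) * f / (nf**3 * nt)
    return (Wf.T @ g_f).reshape(H, W)

def low_freq_project(delta, keep=3):
    """Zero all but a small block of low spatial frequencies (illustrative
    stand-in for the paper's low-frequency component constraint)."""
    F = np.fft.fftshift(np.fft.fft2(delta))
    mask = np.zeros_like(F)
    c = H // 2
    mask[c - keep:c + keep + 1, c - keep:c + keep + 1] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def mafl_like_attack(x, t_feat, steps=50, alpha=0.02, mu=0.9, eps=0.3):
    """Targeted attack: raise feature cosine similarity to the target."""
    delta = np.zeros_like(x)
    g = np.zeros_like(x)
    for _ in range(steps):
        grad = grad_cos_wrt_x(x + delta, t_feat)
        # Momentum accumulation over L1-normalized gradients.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        delta = low_freq_project(delta + alpha * np.sign(g))
        delta = np.clip(delta, -eps, eps)  # perturbation budget
    return x + delta
```

Projecting the perturbation onto low frequencies after every step keeps it spatially smooth, which is the kind of imperceptibility the abstract's low-frequency constraint targets; the momentum term stabilizes the update direction across iterations.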