Adversarial vulnerability assessment of vision language models for healthcare

医疗保健视觉语言模型的对抗性脆弱性评估

阅读:1

Abstract

BACKGROUND: Vision language models (VLMs) are increasingly integrated into medical workflows for diagnostic support and clinical decision-making. While recent studies have demonstrated susceptibility of proprietary VLMs to prompt injection attacks in medical contexts [1], the security landscape of domain-specific medical VLMs remains largely unexplored. This study comprehensively evaluates the vulnerability of multiple VLMs to both prompt injection and adversarial perturbation attacks [2], investigating white-box attacks on MedGemma and black-box transfer attacks across medical-domain and proprietary models. METHODS: We conducted systematic vulnerability assessment using medical images with histologically confirmed malignant lesions spanning multiple modalities: CT, MRI, ultrasound, pathology, endoscopy, and dermatology (n = 18 cases, 3 per modality). For prompt injection, we embedded malicious instructions within text prompts and visual elements. For adversarial perturbations, we used Projected Gradient Descent and optimization-based methods. White-box attacks utilized full model access to MedGemma, while black-box attacks employed transfer-based methods using surrogate models (OpenCLIP, BiomedCLIP, BLIP). RESULTS: MedGemma achieved the lowest prompt injection vulnerability (38% ASR), followed by Claude 4 Sonnet (48%), GPT-5 (57%), and Claude 4.1 Opus (69%), suggesting domain-specific medical training enhances resistance. For adversarial perturbations, white-box attacks on MedGemma exceeded 80% ASR. Black-box transfer attacks showed varying vulnerability: GPT-5 (44%), MedGemma (37%), Claude 4.1 Opus (17%), and Claude 4 Sonnet (6%). Vulnerability rankings differed notably between attack modalities. CONCLUSIONS: This study provides the first comparative security assessment across medical-domain and proprietary VLMs. Results reveal complex vulnerability patterns with no single model providing universal robustness across different attack vectors. These findings emphasize that robust medical AI security requires comprehensive, multi-layered defenses targeting both text-based and image-based attack vectors, with model-specific threat considerations for medical applications. REFERENCES: 1. Clusmann J. et al. ‘Prompt injection attacks on vision language models in oncology.’ Nature Communications 2025;16:1239. 2. Hirano H., Minagi A. and Takemoto K. ‘Universal adversarial attacks on deep neural networks for medical image classification.’ BMC Medical Imaging 2021; 21:9.

特别声明

1、本页面内容包含部分的内容是基于公开信息的合理引用;引用内容仅为补充信息,不代表本站立场。

2、若认为本页面引用内容涉及侵权,请及时与本站联系,我们将第一时间处理。

3、其他媒体/个人如需使用本页面原创内容,需注明“来源:[生知库]”并获得授权;使用引用内容的,需自行联系原作者获得许可。

4、投稿及合作请联系:info@biocloudy.com。