Enhancing DNN Adversarial Robustness via Dual Stochasticity and Geometric Normalization


Abstract

Deep neural networks (DNNs) have achieved remarkable progress across various domains, yet they remain highly vulnerable to adversarial attacks, which significantly hinders their deployment in safety-critical applications. While stochastic defenses have shown promise, most existing approaches rely on fixed noise injection and fail to account for the geometric stability of the decision space. To address these limitations, we introduce a novel framework, termed Dual Stochasticity and Geometric Normalization (DSGN). Specifically, DSGN injects learnable, input-dependent Gaussian noise into both the feature representation and the classifier weights, creating a dual-path stochastic modeling mechanism that captures multi-level predictive uncertainty. To enhance decision consistency, both noisy components are projected onto a unit hypersphere via ℓ2 normalization, constraining the logit space and promoting angular margin separation. This design stabilizes both the representation and the decision geometry, yielding more stable decision boundaries and improved robustness. We evaluate DSGN on several benchmark datasets and CNN architectures. Our results show that DSGN improves robust accuracy over state-of-the-art baselines by approximately 1% to 6% under PGD and 1% to 17% under AutoAttack, demonstrating its effectiveness in enhancing adversarial robustness while maintaining high clean accuracy.
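The abstract does not specify the implementation, but the described mechanism (input-dependent Gaussian noise on features, learnable noise on classifier weights, then ℓ2 normalization of both so logits become scaled cosine similarities) could be sketched as follows. All class, parameter, and scale names here are assumptions for illustration, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSGNHead(nn.Module):
    """Hypothetical sketch of a DSGN-style classifier head.

    Dual stochasticity: Gaussian noise is injected into both the feature
    vector (input-dependent, via a learned std predictor) and the
    classifier weights (via a learnable log-std). Geometric normalization:
    both noisy components are ℓ2-projected onto the unit hypersphere, so
    the logits are scaled cosine similarities with a bounded range.
    """

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        # Predicts a per-dimension noise std from the input features.
        self.feat_sigma = nn.Linear(feat_dim, feat_dim)
        # Learnable log-std for the weight noise (shape matches the weights).
        self.weight_log_sigma = nn.Parameter(
            torch.full((num_classes, feat_dim), -3.0))
        self.scale = scale  # assumed logit scale; the paper may tune this

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Path 1: stochastic features via the reparameterization trick.
        sigma_f = F.softplus(self.feat_sigma(feats))
        noisy_feats = feats + sigma_f * torch.randn_like(feats)
        # Path 2: stochastic classifier weights.
        sigma_w = self.weight_log_sigma.exp()
        noisy_weight = self.weight + sigma_w * torch.randn_like(self.weight)
        # Geometric normalization: project both onto the unit hypersphere.
        f = F.normalize(noisy_feats, dim=1)
        w = F.normalize(noisy_weight, dim=1)
        return self.scale * (f @ w.t())  # scaled cosine logits

head = DSGNHead(feat_dim=128, num_classes=10)
logits = head(torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 10])
```

Because both paths are unit-normalized, each logit is bounded in [-scale, scale], which constrains the logit space and encourages angular (cosine) margin separation as the abstract describes.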
