Prevalence of simplex compression in adversarial deep neural networks

Neural collapse (NC) reveals that the last layer of a network captures data representations: outputs for examples within the same class become nearly identical, while the outputs for the different classes form a simplex equiangular tight frame (ETF) structure. This phenomenon has garnered significant attention due to its implications for the intrinsic properties of neural networks. Interestingly, we observe a simplex compression phenomenon in NC, in which the geometric size of the simplex ETF shrinks under adversarial training, with the degree of compression increasing as the perturbation radius grows. We provide empirical evidence for the existence of simplex compression across a wide range of models and datasets. Furthermore, we establish a rigorous theoretical framework that explains our experimental observations, offering insights into NC under adversarial conditions.
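For context, the simplex ETF mentioned in the abstract has a standard closed form in the neural collapse literature. The sketch below uses assumed notation (K classes, feature dimension d ≥ K, class-mean matrix M, scale α; none of these symbols are taken from this page) to indicate what the "geometric size" of the simplex could refer to:

```latex
% A minimal sketch of the standard simplex ETF definition, under assumed
% notation (K, d, M, U, alpha); this page does not state the paper's exact form.
% The K class means, stacked as the columns of M in R^{d x K}, form a
% (scaled) simplex ETF when
\[
  \mathbf{M} \;=\; \alpha \sqrt{\frac{K}{K-1}}\,
  \mathbf{U}\!\left(\mathbf{I}_K - \frac{1}{K}\,\mathbf{1}_K\mathbf{1}_K^{\top}\right),
  \qquad \mathbf{U}^{\top}\mathbf{U} = \mathbf{I}_K,\quad \alpha > 0 .
\]
% It follows that every class mean has norm alpha and any two distinct means
% meet at the same maximally separated angle:
\[
  \|\mathbf{m}_i\| = \alpha, \qquad
  \cos\angle(\mathbf{m}_i, \mathbf{m}_j) = -\frac{1}{K-1} \quad (i \neq j).
\]
% On this reading, "simplex compression" would correspond to the scale alpha
% shrinking as the adversarial perturbation radius grows, while the
% equiangular structure itself is preserved.
```

If this framing matches the paper's, adversarial training changes only the scale of the frame, not its angles, which is consistent with the abstract's description of a reduced "geometric size."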
Authors: Cao Yang, Chen Yanbo, Liu Weiwei
| Journal: | Proceedings of the National Academy of Sciences of the United States of America | Impact factor: | 9.100 |
| Year: | 2025 | Citation: | 2025 Apr 29; 122(17):e2421593122 |
| DOI: | 10.1073/pnas.2421593122 | | |
