Exploring the uncertainty principle in neural networks through binary classification


Abstract

Neural networks are reported to be vulnerable to minor, imperceptible adversarial attacks. The underlying mechanism and a quantitative measure of this vulnerability remain to be revealed. In this study, we explore the intrinsic trade-off between accuracy and robustness in neural networks, framed through the lens of the "uncertainty principle". By examining the fundamental limitations imposed by this principle, we reveal how neural networks inherently balance precision in feature extraction against susceptibility to adversarial perturbations. Our analysis shows that as a neural network achieves higher accuracy, its vulnerability to adversarial attacks increases, a phenomenon rooted in the uncertainty relation. By borrowing the mathematics of quantum mechanics, we offer a theoretical foundation and an analytical method for understanding the vulnerabilities of deep learning models.
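The accuracy–robustness tension described in the abstract can be illustrated with a minimal sketch (an assumption for illustration, not the paper's method): a logistic-regression binary classifier attacked with the fast gradient sign method (FGSM), where each input is nudged by `eps` along the sign of the input gradient of the loss. Even a small perturbation budget degrades the accuracy of an otherwise well-fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification: two Gaussian blobs in 2-D.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression classifier by gradient descent on the
# cross-entropy loss (stand-in for a one-layer "network").
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(Xs):
    return np.mean((sigmoid(Xs @ w + b) > 0.5) == y)

# FGSM: perturb each input along the sign of the input gradient of the
# loss. For logistic loss, d(loss)/dx = (p - y) * w per example.
def fgsm(Xs, eps):
    p = sigmoid(Xs @ w + b)
    grad_x = np.outer(p - y, w)  # per-example input gradient
    return Xs + eps * np.sign(grad_x)

clean = accuracy(X)
adv = accuracy(fgsm(X, eps=0.5))
print(f"clean accuracy: {clean:.3f}, adversarial accuracy: {adv:.3f}")
```

The adversarial accuracy falls below the clean accuracy, a toy instance of the trade-off the abstract attributes to an uncertainty relation between feature precision and perturbation sensitivity.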
