Fooling the Big Picture in Classification Tasks

Minimally perturbed adversarial examples have been shown to drastically reduce the performance of one-stage classifiers while remaining imperceptible. This paper investigates the susceptibility of hierarchical classifiers, which use fine- and coarse-level output categories, to adversarial attacks. We formulate a program that encodes minimax constraints to induce misclassification of the coarse class of a hierarchical classifier (e.g., changing the prediction of a 'monkey' to a 'vehicle' rather than to another 'animal'). We then develop solutions based on convex relaxations of this program, obtaining an algorithm based on the alternating direction method of multipliers (ADMM) whose performance is competitive with state-of-the-art solvers. We demonstrate the ability of our approach to fool the coarse classification through measures such as the relative loss in coarse classification accuracy and imperceptibility factors. Compared with perturbations generated for one-stage classifiers, fooling a classifier about the 'big picture' requires higher perturbation levels, resulting in lower imperceptibility. We also examine the impact of different label groupings on the performance of the proposed attacks.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00034-022-02226-w.
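The abstract describes an ADMM-based solver for a convex relaxation of the misclassification program but does not reproduce the formulation. As a loose illustration only, the sketch below applies a standard scaled-form ADMM split to a simplified, hypothetical surrogate: a linear coarse classifier with weight matrix W, and a single targeted halfspace constraint standing in for the paper's minimax constraints so that both subproblems have closed forms. All names (admm_coarse_attack, W, y_coarse, t_coarse) are invented for this sketch.

```python
# Hypothetical sketch, NOT the paper's formulation: a scaled-form ADMM loop
# for the surrogate problem
#   min 0.5*||delta||^2  s.t.  a^T (x + delta) >= margin,
# where a = W[t_coarse] - W[y_coarse] is the score difference between a
# target coarse class and the true coarse class of a *linear* classifier.
import numpy as np

def admm_coarse_attack(x, W, y_coarse, t_coarse, margin=1.0, rho=1.0, iters=200):
    a = W[t_coarse] - W[y_coarse]
    c = margin - a @ x          # halfspace constraint on z: a^T z >= c
    z = np.zeros_like(x)
    u = np.zeros_like(x)        # scaled dual variable
    for _ in range(iters):
        # delta-step: prox of 0.5*||.||^2, i.e., simple shrinkage of (z - u)
        delta = rho * (z - u) / (1.0 + rho)
        # z-step: Euclidean projection of (delta + u) onto {z : a^T z >= c}
        v = delta + u
        gap = c - a @ v
        z = v + (max(gap, 0.0) / (a @ a)) * a
        # dual update on the consensus constraint delta = z
        u = u + delta - z
    return delta
```

Under this linear surrogate, the iterates converge to the minimum-norm perturbation reaching the target halfspace; the paper's actual program instead handles nonlinear networks and untargeted coarse misclassification through the convex relaxations described in the abstract.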
Authors: Alkhouri Ismail, Atia George, Mikhael Wasfy
Journal: Circuits Systems and Signal Processing
Impact Factor: 2.000
Year: 2023
Citation: 2023;42(4):2385-2415
DOI: 10.1007/s00034-022-02226-w
