Abstract
Adversarial attacks pose serious challenges for deep neural network (DNN)-based analysis of various input signals. In the case of three-dimensional point clouds, methods have been developed to identify points that play a key role in the network's decision, and these points are crucial in generating existing adversarial attacks. For example, a saliency-map approach is a popular method for identifying adversarial drop points, whose removal would significantly affect the network's decision. This paper seeks to deepen the understanding of three-dimensional adversarial attacks by exploring which point cloud features are most important for predicting adversarial points. Specifically, fourteen key point cloud features, such as edge intensity and distance from the centroid, are defined, and random forest regression and multiple linear regression are employed to assess their predictive power for adversarial points. To analyze the potential of intrinsic point cloud features for generating adversarial attacks, we design an attack method. Unlike traditional attack methods that rely on model-specific vulnerabilities, our approach shifts the focus toward the intrinsic characteristics of the point clouds themselves. The proposed attack is tested across four DNN architectures: PointNet, PointNet++, Dynamic Graph Convolutional Neural Networks (DGCNN), and Point Convolutional Network (PointConv). While its performance is slightly weaker than that of model-specific attacks, it consistently outperforms random guessing and demonstrates improved transferability across architectures. Specifically, the proposed attack achieves, on average, about a 2% higher success rate in the Drop100 setting and approximately a 4% higher success rate in the Drop200 setting when transferred between models.
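The feature-based pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's method: it uses only two hand-picked features (distance from the centroid and a k-nearest-neighbour sparsity proxy) rather than the fourteen defined in the paper, and it substitutes a synthetic saliency target for scores that would, in practice, come from a saliency-map method applied to a trained DNN.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(256, 3))  # synthetic point cloud, 256 points in 3D

# Feature 1: distance of each point from the centroid
centroid = points.mean(axis=0)
dist_centroid = np.linalg.norm(points - centroid, axis=1)

# Feature 2: mean distance to the k nearest neighbours -- a crude,
# illustrative stand-in for a local geometry feature such as edge intensity
pairwise = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
k = 8
knn_mean = np.sort(pairwise, axis=1)[:, 1:k + 1].mean(axis=1)

# Hypothetical saliency target; in the paper this would be produced by a
# saliency-map method on a trained network, not generated synthetically
saliency = 0.7 * dist_centroid + 0.3 * knn_mean \
    + rng.normal(scale=0.05, size=256)

# Multiple linear regression: predict per-point saliency from the features
X = np.column_stack([np.ones(256), dist_centroid, knn_mean])
coef, *_ = np.linalg.lstsq(X, saliency, rcond=None)
pred = X @ coef

# Drop the 100 points predicted to be most salient (the Drop100 setting)
drop_idx = np.argsort(pred)[-100:]
attacked = np.delete(points, drop_idx, axis=0)
print(attacked.shape)  # (156, 3)
```

Because the regression is fit on point-cloud features alone, the resulting drop-point selection requires no gradient access to any particular model, which is what allows the same attacked cloud to be transferred across architectures.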
Beyond adversarial attacks, this study takes a step toward a new perspective in deep learning by shifting the focus from model-specific, gradient-based methods to data-driven, feature-based decision-making. This approach has the potential to reduce computational cost by eliminating the need for repeated backpropagation, paving the way for faster and more interpretable deep learning models. These insights can be applied to various domains, including model explainability, feature selection for robust learning, and the design of efficient defense mechanisms against adversarial threats.