Abstract
In recent years, deep neural networks have been widely employed for point cloud classification. However, discrepancies between training and testing scenarios often lead to erroneous predictions. Domain generalization (DG) aims to achieve high classification accuracy in unseen scenarios without requiring additional training. Although current DG methods make effective use of data augmentation and representation learning, they neglect a component that is crucial for robust generalization: discriminative feature selection. To fully exploit the geometric features of point clouds, we propose a novel domain generalization method that transfers contextual information to improve generalization performance on 3D point clouds. Our method projects the point cloud into multiple views and employs a 2D adaptive feature extractor to capture and aggregate weighted semantic features, while leveraging DGCNN to extract 3D spatial geometric features. In addition, an attention mechanism fuses the 2D semantic features with the 3D geometric features, enabling the selection of discriminative features from point clouds. Experiments demonstrate that our method outperforms state-of-the-art methods on both multi-source and single-source tasks, achieving superior generalization performance.
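To make the fusion step concrete, the following is a minimal NumPy sketch of attention-weighted fusion of per-point 2D semantic and 3D geometric features. It is an illustrative assumption, not the paper's actual layer: the function `attention_fuse`, the scalar scoring rule, and the feature dimensions are all hypothetical stand-ins for the learned attention module described above.

```python
import numpy as np

def attention_fuse(feat_2d, feat_3d):
    """Fuse per-point 2D semantic and 3D geometric features with a
    softmax attention gate over the two modalities (illustrative only;
    the actual attention weights would be produced by a learned network)."""
    # Stack the two modalities: shape (N, 2, D)
    stacked = np.stack([feat_2d, feat_3d], axis=1)
    # Scalar score per modality from the mean activation
    # (a stand-in for a learned scoring MLP)
    scores = stacked.mean(axis=2)                                 # (N, 2)
    # Softmax over the modality axis gives convex fusion weights
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # Weighted sum over modalities yields the fused descriptor: (N, D)
    return (weights[:, :, None] * stacked).sum(axis=1)

# Toy example: 4 points with 8-dimensional features per modality
rng = np.random.default_rng(0)
f2d = rng.random((4, 8))
f3d = rng.random((4, 8))
fused = attention_fuse(f2d, f3d)
print(fused.shape)  # (4, 8)
```

Because the softmax weights are convex, each fused feature lies between the corresponding 2D and 3D feature values, so the gate interpolates between modalities rather than amplifying either one.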