Partial contrastive point cloud self-supervised representation learning


Abstract

Annotating 3D point cloud data is labor-intensive, and self-supervised representation learning can reduce the heavy demand for manual annotation. However, point clouds are sparse yet rich in geometric structure, which makes their self-supervised representation learning more difficult than that of 2D images, especially for contrastive learning. Recent works apply simple augmentations to point clouds to construct contrastive pairs, but they overlook the geometric structure of point cloud data, degrading the quality of the contrastive views. To compensate for this insufficiency in constructing contrastive pairs, we propose a novel contrastive learning approach, termed partial contrastive learning, that delves into the intrinsic geometric structure of point clouds. Specifically, we mask a portion of the structure in one point cloud sample while preserving the structure of another point cloud intact. By contrasting the structural variation between these point clouds, the model learns to encode geometric information into its self-supervised representations, enhancing its ability to maximize the similarity among features that exhibit similar structures. We pretrain our model on the ShapeNet dataset and evaluate its transferability to classification, segmentation, and few-shot classification. Our method achieves 90.94% linear SVM accuracy with contrastive training alone, outperforming ToThePoint by 0.91% in point cloud self-supervised learning, and it also surpasses Point-BERT on segmentation and few-shot classification.
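To make the pair-construction idea concrete, the following is a minimal sketch of producing a "partially masked" view by dropping one local structure (a patch of nearest neighbors around a random anchor) while the second view keeps the full cloud. The function name, the patch-dropping strategy, and the mask ratio are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def partial_mask(points, mask_ratio=0.25, rng=None):
    """Drop the points nearest to a random anchor, masking one local
    structure while the rest of the cloud stays intact.

    points: (N, 3) array of xyz coordinates.
    Returns the (N - k, 3) masked view, where k = int(N * mask_ratio).
    Illustrative sketch only; not the paper's exact masking scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = points.shape[0]
    k = int(n * mask_ratio)
    anchor = points[rng.integers(n)]            # random anchor point
    dist = np.linalg.norm(points - anchor, axis=1)
    keep = np.argsort(dist)[k:]                 # drop the k nearest points
    return points[keep]

# Contrastive pair: a partially masked view vs. the intact cloud.
cloud = np.random.default_rng(0).standard_normal((1024, 3))
masked_view = partial_mask(cloud, mask_ratio=0.25)
intact_view = cloud.copy()
print(masked_view.shape)  # (768, 3)
```

Both views would then be encoded and pulled together by a standard contrastive objective, so the encoder must produce similar features for a shape whether or not one of its local structures is visible.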
