Adaptive graph contrastive learning with hard negative mining for multimodal hyperspectral and LiDAR classification


Abstract

Joint classification of hyperspectral imagery (HSI) and light detection and ranging (LiDAR) data has attracted increasing attention in remote sensing. However, effective multimodal fusion and robust feature modeling remain challenging due to data heterogeneity. Graph neural networks (GNNs) are well suited for modeling non-Euclidean structures and cross-modal relations, but most existing GNN-based methods rely on supervised learning, limiting their applicability in label-scarce scenarios. We propose adaptive graph contrastive learning (AGCL), a self-supervised graph framework for HSI and LiDAR classification. AGCL performs adaptive graph construction through input-conditioned neighborhood selection and learns dynamic affinity matrices for flexible message passing. A hard negative mining strategy constructs informative negative samples for contrastive learning. During self-supervised pretraining, AGCL jointly optimizes intra-modal consistency, cross-modal alignment, and graph topology reconstruction without labeled data. The learned representations are then transferred to downstream classification via supervised fine-tuning. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed framework.
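The paper itself gives no implementation details in the abstract, but the hard negative mining idea can be illustrated with a generic contrastive (InfoNCE-style) loss in which, for each anchor, only the k most similar candidates are retained as negatives. The sketch below is an assumption-laden illustration of that general technique, not the AGCL method; the function name, the NumPy backend, and all parameters (`k`, the temperature `tau`) are hypothetical choices for this example.

```python
import numpy as np

def info_nce_hard_negatives(anchors, positives, candidates, k=4, tau=0.1):
    """Illustrative InfoNCE loss with hard negative mining (not the AGCL
    implementation): for each anchor, only the k most similar candidates
    from the negative pool are kept as negatives.

    anchors, positives: (N, D) paired views; candidates: (M, D) negative pool.
    Vectors are L2-normalised so dot products are cosine similarities.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    a, p, c = l2norm(anchors), l2norm(positives), l2norm(candidates)
    pos_sim = np.sum(a * p, axis=1) / tau              # (N,) positive logits
    neg_sim = (a @ c.T) / tau                          # (N, M) negative logits
    # Hard negative mining: keep the k largest similarities per anchor.
    hard = np.sort(neg_sim, axis=1)[:, -k:]            # (N, k)
    logits = np.concatenate([pos_sim[:, None], hard], axis=1)
    # Cross-entropy with the positive in column 0, stabilised via max-shift.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob_pos = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob_pos.mean()
```

Restricting the denominator to the hardest negatives concentrates the gradient on the samples the encoder currently confuses with the anchor, which is the usual motivation for hard negative mining in contrastive pretraining.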
