Abstract
Node classification is predominantly tackled with Graph Neural Networks (GNNs), whose message-passing mechanism captures complex dependencies among nodes. However, GNNs suffer from several limitations, including high computational cost, memory inefficiency, and the requirement that the complete graph (both training and test nodes) be available to generalize robustly. These issues make GNNs less suitable for real-world deployment and resource-constrained environments. In this work, we address these challenges by applying contrastive learning within Multi-Layer Perceptrons (MLPs) to capture both local and global graph structure. Our proposed framework incorporates three contrastive learning strategies that enable MLPs to outperform GNNs in classification accuracy while providing faster inference and lower memory consumption. Extensive experiments on multiple benchmark datasets demonstrate the efficacy of our approach, positioning it as a compelling alternative to traditional GNN-based methods for node classification.