Abstract
Histopathological images, characterized by their high resolution and intricate cellular structures, present unique challenges for automated analysis. Traditional supervised learning methods often rely on extensive labeled datasets, whose annotation is labour-intensive and expensive. Self-supervised learning techniques have shown promising results in learning representations directly from raw image data without manual annotations. In this paper, we propose a novel margin-aware optimized contrastive learning approach to enhance self-supervised representation learning from histopathological images. The proposed approach optimizes contrastive learning with a margin-based strategy to learn discriminative representations while enforcing a semantic similarity threshold. In the proposed loss function, a margin enforces a minimum level of similarity between positive pairs in the embedding space, and a scaling factor adjusts the sensitivity of the loss, thereby enhancing the discriminative capacity of the learned representations. Comprehensive experimental evaluations on five benchmark histopathological datasets spanning three cancer types demonstrate robust generalization in both in-domain and out-of-domain settings. The results show that the proposed approach outperforms state-of-the-art methods in both cross-domain and cross-disease evaluations.
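For illustration only, a minimal sketch of such a margin-aware contrastive objective, assuming an additive margin applied to the NT-Xent loss (the exact formulation is given in the method section; the symbols $m$, $s$, and $\mathrm{sim}(\cdot,\cdot)$ here are illustrative), is

\[
\mathcal{L}_{i,j} = -\log \frac{\exp\!\big(s\,(\mathrm{sim}(z_i, z_j) - m)\big)}{\exp\!\big(s\,(\mathrm{sim}(z_i, z_j) - m)\big) + \sum_{k \neq i, j} \exp\!\big(s\,\mathrm{sim}(z_i, z_k)\big)},
\]

where $z_i$ and $z_j$ are the embeddings of a positive pair, $\mathrm{sim}(\cdot,\cdot)$ denotes cosine similarity, $m$ is the margin enforcing the semantic similarity threshold on positive pairs, and $s$ is the scaling factor controlling the sensitivity of the loss.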