Hypergraph-based contrastive embedding and attention fusion for detection of skin cancer



Abstract

Skin diseases span a spectrum of conditions, from infections to malignancies. Melanoma, the deadliest form of skin cancer, arises in melanocytes, the cells that produce melanin. Early detection is critical but difficult: the visual cues are often subtle, and diagnostic datasets exhibit severe class imbalance. The proposed C2G-HFMTA framework consists of three hierarchical levels: (a) an overall contrastive learning (CL) framework; (b) two major feature-learning branches, the Graph Contrastive Embedding Framework (GCEF) and the High-dimensional Feature with Multimodal Transformer Attention (HFMTA); and (c) attention and fusion sub-modules, including Hypergraph Bi-Convolutional Attention and Multiscale Transformer Attention, which operate within these branches to enhance discriminative representation learning. We use Clustered Class-Based Segmentation (CCBS) to reshape the training distribution, and our Class-Based Contrastive Loss (CBCL) operates directly on the original dermoscopic images, preserving their semantic integrity while increasing inter-class separability. Experiments were conducted on the HAM10000 dataset of 10,015 dermoscopic images across seven diagnostic categories, using a stratified 70%–10%–20% train–validation–test split; performance was evaluated with accuracy, precision, recall, and F1-score under five-fold stratified cross-validation for robust estimation. In these controlled settings the framework outperforms several recent CNN- and transformer-based baselines, reaching 93.2% accuracy and a 92.9% F1-score, with strong results on minority classes. The method shows promise for supporting computer-aided diagnosis systems, subject to further validation and real-world testing.
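The abstract does not spell out the CBCL formulation, so as an illustrative stand-in the following sketch uses the standard supervised (class-based) contrastive objective, in which same-class embeddings attract and different-class embeddings repel. The function name, temperature value, and NumPy implementation are assumptions for illustration, not details from the paper:

```python
import numpy as np

def class_based_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised (class-based) contrastive loss over a batch of embeddings.

    NOTE: illustrative stand-in for the paper's CBCL, following the common
    supervised-contrastive formulation: for each anchor, positives are all
    other samples of the same class; all non-anchor samples appear in the
    softmax denominator.
    """
    # L2-normalise so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(labels)
    # exclude self-similarity from the softmax denominator
    logits = sim - 1e9 * np.eye(n)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives: same class, excluding the anchor itself
    same = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_counts = same.sum(axis=1)
    valid = pos_counts > 0  # anchors with at least one positive
    loss = -(log_prob * same).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

As expected of a contrastive objective, the loss is lower when embeddings of the same class cluster together than when class labels are scattered across clusters.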
Ablation experiments confirm that grouping, cross-branch fusion, and semantic-guided attention each contribute materially to overall performance.
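The stratified five-fold protocol used for evaluation can be sketched in plain NumPy: each class's samples are shuffled and dealt round-robin into the folds, so every fold preserves the overall class proportions, which matters on imbalanced dermoscopic data such as HAM10000. The helper name and round-robin assignment are illustrative choices, not details from the paper:

```python
import numpy as np

def stratified_kfold_indices(labels, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for stratified k-fold CV.

    Samples of each class are shuffled and distributed round-robin
    across k folds, keeping per-fold class proportions close to the
    overall distribution.
    """
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        for i, j in enumerate(idx):
            folds[i % k].append(j)
    for f in range(k):
        test = np.sort(np.array(folds[f]))
        train = np.sort(np.concatenate(
            [folds[g] for g in range(k) if g != f]))
        yield train, test
```

With 50 samples of one class and 10 of another, each of the five test folds receives exactly 10 and 2 samples respectively, mirroring the 5:1 imbalance.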
