Comparative analysis of supervised and self-supervised learning with small and imbalanced medical imaging datasets


Abstract

Self-supervised learning (SSL) in computer vision has shown its potential to reduce reliance on labeled data. However, most studies have focused on large, balanced, broad-domain datasets such as ImageNet, whereas in real-world medical applications dataset size is typically limited. This study compares the performance of SSL and supervised learning (SL) on small, imbalanced medical imaging datasets. We experimented with four binary classification tasks: age prediction and diagnosis of Alzheimer's disease from brain magnetic resonance imaging scans, pneumonia from chest radiograms, and retinal diseases associated with choroidal neovascularization from optical coherence tomography scans, with mean training-set sizes of 843, 771, 1,214, and 33,484 images, respectively. We tested various combinations of label availability and class frequency distribution, repeating each training run with different random seeds to assess result uncertainty. In most experiments involving small training sets, SL outperformed the selected SSL paradigms, even when only a limited portion of labeled data was available. Our findings highlight the importance of carefully selecting learning paradigms based on specific application requirements, which are influenced by factors such as training set size, label availability, and class frequency distribution.
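The experimental setup described above, varying class frequency distributions and repeating runs over random seeds, can be sketched as follows. This is a minimal illustration, not the authors' code; the helper `make_imbalanced_subset` and its parameters are hypothetical.

```python
import random
from collections import Counter

def make_imbalanced_subset(labels, n_total, pos_fraction, seed):
    """Subsample a binary-labeled dataset to a target size and class ratio.

    labels: list of 0 (negative) or 1 (positive), parallel to the samples.
    Returns indices of the selected subset.
    Hypothetical helper illustrating how an imbalanced training split
    might be constructed; not taken from the paper.
    """
    rng = random.Random(seed)  # seed controls the draw, enabling repeats
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    n_pos = min(len(pos), round(n_total * pos_fraction))
    n_neg = min(len(neg), n_total - n_pos)
    idx = rng.sample(pos, n_pos) + rng.sample(neg, n_neg)
    rng.shuffle(idx)
    return idx

# Repeat over several seeds to estimate result uncertainty,
# as the study does for each label-availability / imbalance setting.
labels = [1] * 400 + [0] * 600  # toy pool: 400 positives, 600 negatives
for seed in (0, 1, 2):
    idx = make_imbalanced_subset(labels, n_total=100, pos_fraction=0.1, seed=seed)
    counts = Counter(labels[i] for i in idx)
    # each subset holds 10 positives and 90 negatives; a model would be
    # trained on it here and metrics aggregated across seeds
```

Aggregating the per-seed metrics (e.g., mean and standard deviation) then quantifies how sensitive each learning paradigm is to the particular draw of the small training set.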
