ImageNet pre-training and two-step transfer learning in chromosome image classification

Abstract

Chromosome image classification typically relies on ImageNet pre-training, yet the potential of leveraging intermediate domains from related staining techniques remains largely underexplored. Here, we evaluate two-step transfer learning, in which classifiers are first fine-tuned on an intermediate domain before targeting the final classification task, across Q-band (BioImLab dataset) and G-band (CIR dataset) chromosome classification. Each dataset serves as the intermediate domain for the other. Across 11 architecture families and three training approaches, models achieve improvements when domain similarity is high and data quality is limited: modern architectures (ConvNeXt, Swin Transformer, ViT, MobileNetV3) show +0.8 to +3.3 percentage-point gains in Macro-F1 on Q-band classification, while traditional CNNs benefit less or show no improvement. On the higher-quality G-band dataset, all architectures approach performance saturation, with minimal gains from two-step transfer (+0.1 to +0.7 percentage points). Consistent results across both transfer directions demonstrate that, with appropriate architecture selection and intermediate-domain similarity, two-step transfer learning can boost performance when target datasets are challenging, while ImageNet pre-training alone suffices for high-quality data. The code is publicly available at https://github.com/MuscleOne/chromosome_TL.
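The two-step procedure described in the abstract can be sketched as follows. This is an illustrative stand-in, not the authors' released code: the tiny backbone, head sizes, and dummy data loaders are placeholders (in the paper, the backbone would be an ImageNet pre-trained model such as ConvNeXt or Swin Transformer, first fine-tuned on the intermediate-domain dataset and then, with a fresh head, on the target dataset).

```python
import torch
import torch.nn as nn

def make_backbone():
    # Placeholder backbone; in the paper this would be an
    # ImageNet pre-trained architecture (ConvNeXt, Swin, ViT, ...).
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

def fine_tune(backbone, head, loader, epochs=1, lr=1e-3):
    # Jointly update backbone and task-specific head on one domain.
    opt = torch.optim.Adam(
        list(backbone.parameters()) + list(head.parameters()), lr=lr
    )
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(head(backbone(x)), y)
            loss.backward()
            opt.step()
    return backbone

# Dummy single-batch "loaders" standing in for real chromosome data
# (24 classes: autosomes 1-22 plus X and Y).
inter_loader = [(torch.randn(4, 1, 32, 32), torch.randint(0, 24, (4,)))]
target_loader = [(torch.randn(4, 1, 32, 32), torch.randint(0, 24, (4,)))]

# Step 1: fine-tune on the intermediate domain (e.g. G-band).
backbone = make_backbone()
backbone = fine_tune(backbone, nn.Linear(8, 24), inter_loader)

# Step 2: attach a fresh head and fine-tune on the target domain
# (e.g. Q-band), reusing the intermediate-domain backbone weights.
target_head = nn.Linear(8, 24)
backbone = fine_tune(backbone, target_head, target_loader)
```

The one-step baseline in the paper would simply skip step 1 and fine-tune the ImageNet pre-trained backbone directly on the target domain.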
