One size does not fit all in evaluating model selection scores for image classification


Abstract

Selecting pretrained models for image classification often involves computationally intensive finetuning. This study addresses a gap in the standardized evaluation of transferability scores, which could simplify model selection by ranking pretrained models without exhaustive finetuning. The motivation is to reduce the computational burden of model selection through a consistent approach that guides practitioners in balancing accuracy and efficiency across tasks. The study evaluates 14 transferability scores on 11 benchmark datasets, covering both Convolutional Neural Network (CNN) and Vision Transformer (ViT) models, and ensures consistent experimental conditions to counter the variability in previous research. Key findings reveal substantial variability in score effectiveness depending on dataset characteristics (e.g., fine-grained versus coarse-grained classes) and model architecture. ViT models generally show superior transferability, especially on fine-grained datasets. While no single score is best in all cases, some scores excel in specific contexts. Beyond predictive accuracy, the study also evaluates computational efficiency and identifies scores suited to resource-constrained scenarios. These findings offer guidance on choosing appropriate transferability scores to optimize model selection strategies and support efficient deployment in practice.
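To make the idea of "ranking pretrained models without exhaustive finetuning" concrete, the sketch below implements a toy transferability proxy: leave-one-out nearest-class-centroid accuracy computed on frozen features extracted by each candidate model. This is a hypothetical illustration of the general workflow, not one of the 14 scores evaluated in the paper; the function name, the synthetic features, and the candidate-model names are all invented for demonstration.

```python
import numpy as np

def nearest_centroid_score(features, labels):
    """Toy transferability proxy: leave-one-out nearest-class-centroid
    accuracy on frozen features. Higher means the features separate
    the target classes better, suggesting easier transfer."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    counts = np.array([(labels == c).sum() for c in classes])
    correct = 0
    for x, y in zip(features, labels):
        ci = int(np.where(classes == y)[0][0])
        adjusted = centroids.copy()
        # Leave-one-out: remove x's contribution to its own class centroid.
        adjusted[ci] = (centroids[ci] * counts[ci] - x) / (counts[ci] - 1)
        pred = classes[np.argmin(np.linalg.norm(adjusted - x, axis=1))]
        correct += int(pred == y)
    return correct / len(labels)

# Synthetic stand-ins for features extracted by two candidate models:
# "model_a" yields class-separated features, "model_b" yields pure noise.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=90)
feats_a = rng.normal(size=(90, 8))
feats_a[:, 0] += labels * 5.0          # strong class signal on one dimension
feats_b = rng.normal(size=(90, 8))     # no class signal

scores = {
    "model_a": nearest_centroid_score(feats_a, labels),
    "model_b": nearest_centroid_score(feats_b, labels),
}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # the better-separated features should rank first
```

In practice, each candidate model's real penultimate-layer features on the target dataset would replace the synthetic arrays, and the resulting ranking would guide which models to actually finetune.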
