On the effectiveness of limited-data large language model fine-tuning for Arabic

Abstract

This paper presents an investigation into fine-tuning large language models (LLMs) for Arabic natural language processing (NLP) tasks. Although recent multilingual LLMs have made remarkable progress in zero-shot and few-shot settings, specialized models such as fine-tuned BERT variants continue to define state-of-the-art (SOTA) performance on many Arabic tasks. We demonstrate that by fine-tuning a general-purpose LLM (GPT-4o mini) on only a small subset (3.0%-7.5%) of the training samples, we exceed the previous best reported results in sentiment analysis (ArSAS) and sarcasm detection (ArSarcasm), while achieving performance statistically comparable to the SOTA in news categorization (ASND). This study highlights that LLMs, when properly adapted, can outperform established models without relying on full-scale annotated training sets. Furthermore, our analysis with the open-source Gemma-3-27B model confirms the generalizability of our data-efficient method. Notably, this approach enabled the model to achieve performance statistically comparable to the SOTA on all three tasks, although the proprietary GPT-4o mini maintained an overall performance advantage. We further compare GPT-4o with GPT-4o mini to examine the impact of model size on fine-tuning. GPT-4o outperforms GPT-4o mini across all sample sizes, but by small margins (<1%). In particular, GPT-4o fine-tuned on 100 samples achieves marginally better performance than GPT-4o mini fine-tuned on 500 samples, indicating that larger models require fewer labeled examples. Additionally, we find that fine-tuning performance follows predictable scaling, with GPT-4o mini's performance growth function accurately estimating GPT-4o's results (error < 0.005). This enables efficient performance estimation for larger models using smaller ones. Our findings emphasize the practical benefits of fine-tuning LLMs for Arabic NLP, while demonstrating predictable scaling laws that can guide efficient model selection and adaptation.