Abstract
This paper investigates fine-tuning large language models (LLMs) for Arabic natural language processing (NLP) tasks. Although recent multilingual LLMs have made remarkable progress in zero-shot and few-shot settings, specialized models such as fine-tuned BERT variants continue to define state-of-the-art (SOTA) performance on many Arabic tasks. We demonstrate that by fine-tuning a general-purpose LLM (GPT-4o mini) on only a small subset (3.0%-7.5%) of the training samples, we exceed the previously reported best results in sentiment analysis (ArSAS) and sarcasm detection (ArSarcasm), while achieving performance statistically comparable to the SOTA in news categorization (ASND). This study highlights that LLMs, when properly adapted, can outperform established models without relying on full-scale annotated training sets. Furthermore, our analysis with the open-source Gemma-3-27B model confirms the generalizability of our data-efficient method: the approach enabled the model to achieve performance statistically comparable to the SOTA on all three tasks, although the proprietary GPT-4o mini maintained an overall performance advantage. We further compare GPT-4o with GPT-4o mini to examine the impact of model size on fine-tuning. GPT-4o outperforms GPT-4o mini across all sample sizes, but only by small margins (<1%). Notably, GPT-4o fine-tuned on 100 samples achieves marginally better performance than GPT-4o mini fine-tuned on 500 samples, indicating that larger models require fewer labeled examples. Additionally, we find that fine-tuning performance follows predictable scaling: GPT-4o mini's performance growth function accurately estimates GPT-4o's results (error < 0.005), enabling efficient performance estimation for larger models using smaller ones. Our findings emphasize the practical benefits of fine-tuning LLMs for Arabic NLP and demonstrate scaling behavior that can guide efficient model selection and adaptation.