Abstract
The exponential growth of multimedia content in the digital age has necessitated advanced cross-lingual systems capable of understanding and interpreting visual information across languages. However, current efforts have focused predominantly on monolingual tasks, leaving a substantial gap in cross-lingual multimedia analysis, particularly for non-English languages. To address this gap, AraTraditions10k, a comprehensive and culturally rich dataset, is introduced to support cross-lingual image annotation, retrieval, and tagging, with a specific focus on Arabic and English. The dataset consists of 10,000 carefully curated images representing diverse aspects of Arabic culture, each annotated with five captions in Modern Standard Arabic (MSA) that were professionally translated into English. To maximize the utility of the dataset, advanced machine learning models have been developed, including a Multi-Layer Perceptron (MLP) for tag recommendation and an enhanced Word2VisualVec (W2VV) model for sentence recommendation. These models are augmented with attention mechanisms and contrastive loss functions, yielding measurable performance improvements. Notably, the tag recommendation system achieved an overall top-1 accuracy of 93%, while the English sentence recommendation system attained BLEU-4, METEOR, ROUGE-L, CIDEr, and SPICE scores of 78.2, 68.3, 75.8, 136.7, and 52.0, respectively. By addressing the linguistic and cultural gaps in existing datasets, AraTraditions10k sets a new benchmark for quality and inclusivity in multilingual datasets, contributing to the broader field of cross-lingual multimedia analysis and facilitating the development of more accessible and culturally sensitive multimedia technologies.
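To give a concrete sense of the contrastive objective mentioned above, the following is a minimal sketch of a symmetric InfoNCE-style contrastive loss over paired image and sentence embeddings. This is an illustrative formulation only, not the paper's exact loss; the function name, the temperature value, and the use of in-batch negatives are assumptions for the sketch.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss over a batch of paired
    image/sentence embeddings (one matching pair per row)."""
    # Normalize rows so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # (B, B) similarity matrix
    labels = np.arange(len(logits))          # matching pairs lie on the diagonal

    def xent(l):
        # Cross-entropy of the diagonal (correct) entries under a row softmax.
        l = l - l.max(axis=1, keepdims=True)             # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

The loss is low when each image embedding is closest to its own caption's embedding and high when the batch's pairings are scrambled, which is the pressure that aligns the two modalities during training.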