Abstract
Neural machine translation (NMT) for low-resource languages such as Assamese and Bodo has seen dramatic quality improvements with large multilingual models such as the Multilingual Bidirectional and Auto-Regressive Transformer (mBART50) and IndicTrans2, but their parameter counts (often exceeding one billion) make real-time, on-device deployment infeasible. Although Assamese and Bodo are not among mBART50's pretraining languages, we first fine-tuned mBART50 on the AI4Bharat Samanantar Assamese-English corpus and IndicTrans2-derived Bodo-English data to enable cross-lingual adaptation from related Indo-Aryan and Tibeto-Burman languages. We then propose a novel two-stage approach that combines sparse Mixture-of-Experts (MoE) architectures with cross-lingual knowledge distillation, yielding a 400-million-parameter student model that stays within approximately one Bilingual Evaluation Understudy (BLEU) point of its 1.3-billion-parameter teacher while reducing active computation per token roughly four-fold. The student is a twelve-layer Transformer encoder-decoder: the first half of the encoder and decoder layers remain standard, while the latter half incorporate sparsely activated MoE feed-forward blocks (four experts with top-two gating in the encoder; two experts with top-one gating in the decoder) and learnable language-prefix embeddings. Cross-lingual knowledge distillation transfers both hard and soft labels from the fine-tuned mBART50 teacher on these corpora, with evaluation on the FLORES-200 multilingual benchmark.
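The sparsely activated MoE feed-forward routing described above can be sketched as follows. This is a minimal PyTorch illustration of top-k expert gating, not the paper's implementation; the dimensions, expert MLP shape, and gating details are assumptions for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Sparse MoE feed-forward block: a learned gate routes each token
    to its top-k experts, and their outputs are combined with the
    renormalised gate weights (hyperparameters are illustrative)."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
            )
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (tokens, d_model)
        scores = self.gate(x)                          # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep the top-k experts
        weights = F.softmax(weights, dim=-1)           # renormalise over selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```

With top-two gating over four experts, only two expert MLPs run per token, which is how the active computation per token drops even as total parameters grow.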
On a 10,000-sentence test set, the student achieves 34.5 BLEU versus the teacher's 35.2 for Assamese-English, and 31.2 versus 32.0 for Bodo-English, while running inference at approximately 24 ms per sentence on an RTX 3050 laptop GPU, about 280% faster than the dense teacher. To our knowledge, this is the first demonstration of cross-lingual MoE-based distillation for Indic NMT, enabling efficient, high-quality translation at the edge.
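The hard- and soft-label distillation objective mentioned above can be sketched as a standard combined loss. This is a generic illustration, not the paper's exact objective; the temperature and mixing weight are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_targets,
                      temperature=2.0, alpha=0.5):
    """Mix hard-label cross-entropy with soft-label KL divergence
    against the teacher's temperature-smoothed distribution.
    (temperature and alpha are illustrative, not the paper's values.)"""
    # Hard labels: cross-entropy against the reference tokens.
    hard = F.cross_entropy(student_logits, hard_targets)
    # Soft labels: match the teacher's smoothed output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard-loss magnitude
    return alpha * hard + (1 - alpha) * soft
```

In cross-lingual distillation, the teacher logits come from the fine-tuned mBART50 model while the student is trained on both language pairs with this combined signal.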