Cross-lingual sparse-MoE distillation for efficient low-resource Assamese-English and Bodo-English translation

Abstract

Neural machine translation (NMT) for low-resource languages such as Assamese and Bodo has seen dramatic quality improvements with large multilingual models such as the Multilingual Bidirectional and Auto-Regressive Transformer (mBART50) and the IndicTrans2 multilingual Transformer, but their parameter counts (often exceeding one billion) make real-time, on-device deployment infeasible. Although Assamese and Bodo are not among mBART50's pretraining languages, we first fine-tune mBART50 on the AI4Bharat Samanantar Assamese-English and IndicTrans2-derived Bodo-English corpora to enable cross-lingual adaptation from related Indo-Aryan and Tibeto-Burman languages. We propose a novel two-stage approach that combines sparse Mixture-of-Experts (MoE) architectures with cross-lingual knowledge distillation to yield a 400-million-parameter student model that retains translation quality within approximately one Bilingual Evaluation Understudy (BLEU) point of its 1.3-billion-parameter teacher while reducing active computation per token by approximately fourfold. Our student uses a twelve-layer Transformer encoder-decoder: the first half of the encoder and decoder layers remains standard, while the latter half incorporates sparsely activated MoE feed-forward blocks (four experts in the encoder with top-two gating; two experts in the decoder with top-one gating) and learnable language-prefix embeddings. We perform cross-lingual knowledge distillation, transferring both hard and soft labels from the fine-tuned mBART50 teacher on the AI4Bharat Samanantar Assamese-English corpus and IndicTrans2-derived Bodo-English data, with evaluation on the FLORES-200 multilingual benchmark. On a 10,000-sentence test set, our student achieves 34.5 BLEU compared with 35.2 BLEU for the teacher on Assamese-English, and 31.2 compared with 32.0 on Bodo-English, while running inference at approximately 24 ms per sentence on an RTX 3050 laptop GPU, about 280% faster than the dense teacher. To our knowledge, this is the first demonstration of cross-lingual MoE-based distillation for Indic NMT, enabling efficient, high-quality translation at the edge.
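To make the two components named in the abstract concrete, the sketch below shows, in PyTorch, a sparsely gated MoE feed-forward block with top-k routing and a distillation objective mixing hard-label cross-entropy with temperature-scaled KL divergence against teacher logits. This is a minimal sketch, not the authors' released code: the class and function names, hidden sizes, temperature, loss weight alpha, and padding index are illustrative assumptions rather than details reported in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    """Sparse MoE feed-forward block: each token is routed to its top_k experts."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); flatten to (tokens, d_model) for routing
        tokens = x.reshape(-1, x.size(-1))
        gate_logits = self.gate(tokens)                          # (tokens, num_experts)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = weights.softmax(dim=-1)                        # normalise routing weights

        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                     # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)


def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature: float = 2.0, alpha: float = 0.5, pad_id: int = 1):
    """Hard-label cross-entropy plus temperature-scaled KL against teacher logits."""
    # logits: (batch, seq, vocab); hard_labels: (batch, seq)
    ce = F.cross_entropy(student_logits.transpose(1, 2), hard_labels, ignore_index=pad_id)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kl

Under the configuration described in the abstract, the encoder's sparse layers would correspond to MoEFeedForward(num_experts=4, top_k=2) and the decoder's to MoEFeedForward(num_experts=2, top_k=1), so only one or two expert feed-forward networks are evaluated per token even though all expert parameters are stored, which is the source of the reduction in active computation per token.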
