Cross-language dissemination of Chinese classical literature using multimodal deep learning and artificial intelligence.

Authors: Bai Yulan, Lei Songhua
Against the backdrop of rapid advances in artificial intelligence (AI), multimodal deep learning (DL) technologies offer new possibilities for cross-language translation. This work proposes a multimodal DL-based translation model, Transformer-Multimodal Neural Machine Translation (TMNMT), to promote the cross-language dissemination and comprehension of Chinese classical literature. The model innovatively integrates visual features generated by conditional diffusion models and applies knowledge distillation for efficient transfer learning, fully exploiting the latent information in multilingual corpora. To dynamically combine textual and visual information, the work designs a gated neural unit-based multimodal feature fusion mechanism and a decoder-side visual feature attention module that enhance translation performance. Experimental results demonstrate that TMNMT significantly outperforms baseline models in both multimodal and text-only translation tasks. It achieves a BLEU score of 39.2 on the Chinese literature dataset, at least 1.55% higher than the other models, and a METEOR score of 64.8, at least 8.14% higher. Moreover, incorporating the decoder's visual module notably boosts performance: BLEU and METEOR scores on the En-Ge Test2017 task improve by 2.55% and 2.33%, respectively. This work provides technical support for the multilingual dissemination of Chinese classical literature and broadens the application prospects of AI in cultural domains.
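
The abstract only names the two architectural components. The sketch below is not from the paper; it is a minimal PyTorch-style illustration, under stated assumptions, of how a gated multimodal fusion unit and a decoder-side visual attention module are commonly built. The class names (GatedMultimodalFusion, DecoderVisualAttention), dimensions, and the residual/gating details are illustrative assumptions rather than the authors' published code.

```python
import torch
import torch.nn as nn


class GatedMultimodalFusion(nn.Module):
    """Hypothetical gate deciding, per dimension, how much visual signal
    is mixed into the textual encoder states (not the authors' code)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.visual_proj = nn.Linear(d_model, d_model)   # align visual features with the text space
        self.gate = nn.Linear(2 * d_model, d_model)      # gate computed from both modalities

    def forward(self, text_feats: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:   (batch, src_len, d_model) Transformer encoder states
        # visual_feats: (batch, src_len, d_model) visual features aligned to the source tokens
        v = self.visual_proj(visual_feats)
        g = torch.sigmoid(self.gate(torch.cat([text_feats, v], dim=-1)))
        return text_feats + g * v                        # gated residual fusion


class DecoderVisualAttention(nn.Module):
    """Hypothetical extra cross-attention in the decoder that attends over visual features."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, decoder_states: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # decoder_states: (batch, tgt_len, d_model); visual_feats: (batch, n_regions, d_model)
        attended, _ = self.attn(decoder_states, visual_feats, visual_feats)
        return self.norm(decoder_states + attended)      # residual connection + layer norm


if __name__ == "__main__":
    fusion = GatedMultimodalFusion(d_model=512)
    vis_attn = DecoderVisualAttention(d_model=512)
    text = torch.randn(2, 10, 512)       # toy encoder states
    visual = torch.randn(2, 10, 512)     # toy visual features aligned to source tokens
    regions = torch.randn(2, 49, 512)    # toy visual region features for the decoder
    print(fusion(text, visual).shape)                    # torch.Size([2, 10, 512])
    print(vis_attn(torch.randn(2, 7, 512), regions).shape)  # torch.Size([2, 7, 512])
```

If the paper's design follows this common pattern, the fusion unit would sit on the encoder side while the visual cross-attention would be inserted into each decoder layer; the ablation gains on En-Ge Test2017 reported above refer to adding that decoder-side module.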
