A joint span-entity prediction approach with generative and cross-lingual meta-learning for low-resource Japanese NER


Abstract

Low-resource Japanese few-shot named entity recognition (NER) is hindered by limited annotations, imperfect cross-lingual alignment, and boundary ambiguity. MAML-ProtoNet++ is a hierarchical dynamic meta-learning framework that integrates generative augmentation, cross-lingual contrastive pretraining, fast meta-adaptation, and joint span-entity prediction in a unified training pipeline. Support sets are expanded with pseudo-samples generated by the multilingual model mT5 and filtered through confidence screening, boundary verification, and semantic diversity control, reducing noise while improving coverage. Cross-lingual representations are strengthened by aligning Japanese-English entity pairs from WikiData with an NT-Xent-based contrastive objective, providing alignment signals complementary to multilingual pretraining. The meta-learning backbone combines MAML-style rapid adaptation with ProtoNet-style prototype matching, supported by multi-granularity encoding from character-level features, word-level embeddings, and contextual Transformer representations, while a joint span-type module improves consistency between boundary detection and type classification. On Japanese few-shot NER, Macro-F1 reaches 0.772 under the 5-shot setting, with boundary accuracies of 0.85 (start) and 0.84 (end). Cross-lingual pretraining increases the cosine similarity of Japanese-English entity pairs from 0.61 to 0.85, and dynamic parameter control keeps F1 above 0.73 on high-complexity tasks, indicating strong robustness and transferability in low-resource Japanese NER.
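The NT-Xent objective mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function name, batch construction, and temperature value are illustrative assumptions. It treats row i of each matrix as one aligned Japanese-English entity pair (the positive), and the other English entities in the batch as in-batch negatives:

```python
import numpy as np

def nt_xent_loss(z_ja, z_en, tau=0.1):
    """NT-Xent contrastive loss over aligned entity-pair embeddings.

    z_ja, z_en: (N, d) arrays; row i of z_ja is assumed to be aligned
    with row i of z_en (e.g. a Japanese entity and its English pair).
    tau: temperature scaling the cosine similarities (illustrative value).
    """
    # L2-normalize so dot products become cosine similarities
    z_ja = z_ja / np.linalg.norm(z_ja, axis=1, keepdims=True)
    z_en = z_en / np.linalg.norm(z_en, axis=1, keepdims=True)
    sim = z_ja @ z_en.T / tau  # (N, N) similarity matrix
    # Row-wise log-softmax; the diagonal entries are the positive pairs,
    # all off-diagonal entries act as in-batch negatives.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls each aligned pair together and pushes mismatched pairs apart, which is the mechanism behind the reported rise in Japanese-English cosine similarity.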
