Abstract
Low-resource Japanese few-shot named entity recognition (NER) is hindered by limited annotations, imperfect cross-lingual alignment, and boundary ambiguity. MAML-ProtoNet++ is a hierarchical dynamic meta-learning framework that integrates generative augmentation, cross-lingual contrastive pretraining, fast meta-adaptation, and joint span-entity prediction in a unified training pipeline. Support sets are expanded with pseudo-samples generated by the multilingual model mT5 and filtered through confidence screening, boundary verification, and semantic diversity control to reduce noise while improving coverage. Cross-lingual representations are strengthened by aligning Japanese-English entity pairs from Wikidata with an NT-Xent-based contrastive objective, providing alignment signals complementary to multilingual pretraining. The meta-learning backbone combines MAML-style rapid adaptation with ProtoNet-style prototype matching, supported by multi-granularity encoding that fuses character-level features, word-level embeddings, and contextual Transformer representations, while a joint span-type module improves consistency between boundary detection and type classification. On Japanese few-shot NER, Macro-F1 reaches 0.772 in the 5-shot setting, with boundary accuracies of 0.85 (start) and 0.84 (end). Cross-lingual pretraining raises the cosine similarity of Japanese-English entity pairs from 0.61 to 0.85, and dynamic parameter control keeps F1 above 0.73 on high-complexity tasks, indicating strong robustness and transferability in low-resource Japanese NER.
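As a rough illustration of the cross-lingual alignment objective mentioned above, the sketch below implements a standard symmetric NT-Xent loss over paired Japanese-English entity embeddings; the function name, temperature value, and in-batch negative sampling scheme are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_alignment_loss(ja_emb: torch.Tensor,
                           en_emb: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss over paired Japanese/English entity embeddings.

    ja_emb, en_emb: [batch, dim] tensors; row i of each tensor is assumed
    to be the same entity in the two languages (the positive pair), and
    all other rows in the batch act as in-batch negatives.
    """
    ja = F.normalize(ja_emb, dim=-1)
    en = F.normalize(en_emb, dim=-1)

    # Cosine similarity between every Japanese and every English entity.
    logits = ja @ en.t() / temperature          # [batch, batch]
    targets = torch.arange(ja.size(0), device=ja.device)

    # Symmetric contrastive loss: JA->EN and EN->JA directions.
    loss_ja2en = F.cross_entropy(logits, targets)
    loss_en2ja = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_ja2en + loss_en2ja)
```

Under this objective, pulling the diagonal (matched) pairs together while pushing apart all other in-batch pairs is what drives the reported increase in cosine similarity between aligned entity representations.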