Abstract
Named entity recognition (NER), a core task in natural language processing (NLP), remains constrained by heavy reliance on annotated data, limited cross-domain generalization, and difficulty recognizing out-of-vocabulary entities. In specialized domains such as the analysis of Global Navigation Satellite System (GNSS) countermeasures, including anti-jamming and anti-spoofing, where datasets are small and domain knowledge is scarce, existing models exhibit marked performance degradation. To address these challenges, we propose HybridNER, a framework that integrates locally trained span-based models with large language models (LLMs). The approach employs a span-prediction metasystem that first fuses outputs from multiple base learners by computing span-to-label compatibility scores and assigns an uncertainty estimate to each candidate entity. Entities whose uncertainty exceeds a preset threshold are then routed to an LLM for second-stage classification, and the final decision integrates both sources to exploit their complementary strengths. Experiments on multiple general-purpose and domain-specific datasets show that HybridNER achieves higher precision, recall, and F1 than traditional ensemble methods such as majority voting and weighted voting, with especially pronounced gains in specialized domains, thereby improving the robustness and generalization of NER.
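The routing logic described above can be sketched as follows. This is a minimal illustration under assumptions of our own: the function names (`fuse_base_learners`, `classify_span`), the averaging fusion, the `1 - max score` uncertainty estimate, and the threshold value are all hypothetical stand-ins, not the paper's actual implementation.

```python
# Sketch of the two-stage decision: fuse base-learner scores, estimate
# uncertainty, and route uncertain spans to an LLM. All names and the
# threshold value are illustrative assumptions.

THRESHOLD = 0.5  # preset uncertainty threshold (assumed value)

def fuse_base_learners(span_scores):
    """Fuse span-to-label compatibility scores from multiple base
    learners by averaging, and derive a simple uncertainty estimate
    (one minus the top fused score)."""
    labels = span_scores[0].keys()
    fused = {lbl: sum(s[lbl] for s in span_scores) / len(span_scores)
             for lbl in labels}
    best = max(fused, key=fused.get)
    return best, 1.0 - fused[best]

def classify_span(span_text, span_scores, llm_classify):
    """Return the fused label, or the LLM's second-stage decision
    when the fused prediction is too uncertain."""
    label, uncertainty = fuse_base_learners(span_scores)
    if uncertainty > THRESHOLD:
        return llm_classify(span_text)  # second-stage LLM classification
    return label

# Two base learners agree strongly, so no LLM call is needed:
scores = [{"ORG": 0.9, "LOC": 0.1}, {"ORG": 0.8, "LOC": 0.2}]
print(classify_span("GNSS", scores, llm_classify=lambda t: "ORG"))
```

With three or more labels the fused top score can fall below the threshold, in which case the span is handed to `llm_classify`; the real system integrates both sources rather than simply overriding the local prediction.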