Abstract
Deep learning-based environmental microorganism recognition in a dynamic world demands models capable of recognizing novel classes as new tasks arrive. However, this goal is hindered by data scarcity, high annotation costs, and the plasticity-stability dilemma. Few-Shot Class-Incremental Learning (FSCIL) aims to address these challenges, yet a dedicated benchmark for environmental microorganism recognition remains absent. To bridge this gap, we establish the first FSCIL benchmark for environmental microorganism recognition and propose a unified evaluation protocol on the EMDS-7 dataset. We systematically reproduce 10 representative FSCIL methods: CEC, FACT, SAVC, PFR, ADBS, Comp, TEEN, Limit, BiDist, and CLOSER, and conduct comprehensive comparative experiments under consistent implementation settings. We report multidimensional evaluation metrics, including per-session accuracy, average accuracy across sessions, and the performance drop rate, which quantifies long-term performance degradation, together with thorough performance analyses. Our results reveal that SAVC and FACT achieve the highest overall accuracy, while PFR is more stable across sessions at the cost of a lower accuracy ceiling. In contrast, CLOSER and BiDist perform substantially worse. Overall, our benchmark shows that FSCIL methods effective on generic image benchmarks do not transfer directly to environmental microorganism recognition, necessitating task-specific adaptations. This work provides a reproducible foundational platform that enables fair comparisons and accelerates future research on FSCIL for environmental microorganism recognition.