Abstract
High-accuracy fish-species identification is a key prerequisite for adaptive, disease-reducing precision feeding in automated polyculture systems. However, severe underwater degradation (light fluctuation, turbidity, and occlusion), compounded by high inter-species visual similarity, undermines biomass estimation and fish-counting accuracy, while conventional CNN-based methods lack the biological priors needed to recover lost semantic cues. To overcome these limitations, this study proposes a knowledge-augmented framework that integrates a Fish Multimodal Knowledge Graph (FM-KG) with deep visual recognition. Unlike existing approaches that rely solely on pixel-level restoration or visual features, the FM-KG fuses multi-source biological and environmental information to encode species-specific semantics. Its semantic embeddings drive a Semantically Guided Denoising Module (SGDM) that restores degraded images by emphasizing biologically meaningful structures, while a Knowledge-Driven Attention Dynamic Modulation Layer (K-ADML) adaptively re-weights spatial and channel attention according to inter-species relations encoded in the knowledge graph. A downstream classifier then performs fine-grained species recognition. Experiments on aquaculture datasets show that the proposed framework consistently outperforms state-of-the-art underwater image enhancers and recognizers, particularly under low signal-to-noise-ratio and severe-blur conditions. This work establishes a semantically grounded, knowledge-enhanced paradigm for mitigating information loss in aquatic vision, providing a foundation for robust and intelligent aquaculture automation.
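To make the K-ADML idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of knowledge-driven channel re-weighting: each channel's base attention score is shifted by the similarity between a hypothetical per-channel prototype vector and a species embedding taken from the knowledge graph, then squashed through a sigmoid. All names (`kadml_channel_weights`, `channel_protos`, `species_emb`, `alpha`) are illustrative assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors; 0.0 for zero vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def kadml_channel_weights(base_scores, channel_protos, species_emb, alpha=1.0):
    """Hypothetical K-ADML-style channel attention.

    Each channel's base score is shifted by alpha times the cosine similarity
    between that channel's prototype vector and the knowledge-graph species
    embedding, so channels aligned with species semantics are amplified.
    """
    return [sigmoid(s + alpha * cosine(p, species_emb))
            for s, p in zip(base_scores, channel_protos)]

# Toy usage: two channels, the first aligned with the species embedding.
weights = kadml_channel_weights(
    base_scores=[0.0, 0.0],
    channel_protos=[[1.0, 0.0], [0.0, 1.0]],
    species_emb=[1.0, 0.0],
)
```

Here the semantically aligned channel receives a higher weight than the orthogonal one; in the actual framework this modulation would apply analogously over spatial positions as well.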