Edge-Deployable Fish Feeding-State Quantification and Recognition via Frame-Pair Motion Encoding and EfficientFeedingNet


Abstract

Accurate feeding-state monitoring is essential for improving feeding management, reducing feed waste, and supporting water quality and fish welfare in aquaculture. However, existing vision-based methods often rely on subjective labels or computationally expensive temporal models, which limits practical on-farm deployment. Here, we propose an objective, edge-deployable framework for motion-driven feeding-state quantification and binary feeding/non-feeding recognition from top-view videos. The framework integrates frame-pair dense optical-flow encoding with a lightweight network (EfficientFeedingNet) to enable real-time deployment. Using an optical-flow-derived motion-intensity signal (V-Value), we automatically delineate feeding-response intervals and construct a perception-based dataset (Perceptual Dataset) with reproducible binary labels, alongside an observer-labeled Intuitive Dataset. Across representative backbones, models trained on the Perceptual Dataset achieve >90% test accuracy and improve over the Intuitive Dataset by 13.13-18.46 percentage points. The proposed EfficientFeedingNet attains 96.53% test accuracy while remaining lightweight for edge deployment; on a Jetson Orin NX, it runs at 7.0 ms per image (143.24 fps). Overall, the proposed framework provides a practical basis for timely, data-driven feeding decisions in precision aquaculture.
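The pipeline sketched in the abstract, computing a motion-intensity signal (V-Value) from frame pairs, then thresholding it to delineate feeding-response intervals, can be illustrated with a minimal, dependency-free sketch. Note the assumptions: the paper derives V-Value from dense optical flow, whereas this sketch substitutes plain frame differencing to stay self-contained, and the function names, threshold, and synthetic data are hypothetical, not from the paper.

```python
import numpy as np

def v_value(frame_a, frame_b):
    # Mean per-pixel motion magnitude for one frame pair.
    # The paper uses dense optical flow; simple frame differencing
    # is used here only as a dependency-free stand-in.
    diff = frame_b.astype(np.float32) - frame_a.astype(np.float32)
    return float(np.abs(diff).mean())

def feeding_intervals(v_series, threshold):
    # Delineate contiguous runs where V-Value exceeds the threshold,
    # yielding half-open (start, end) index intervals.
    intervals, start = [], None
    for i, v in enumerate(v_series):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(v_series)))
    return intervals

# Synthetic top-view frames: calm water, a burst of motion, calm again.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 5, (64, 64)) for _ in range(4)]      # calm
frames += [rng.integers(0, 255, (64, 64)) for _ in range(4)]   # feeding burst
frames += [rng.integers(0, 5, (64, 64)) for _ in range(4)]     # calm

vs = [v_value(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
print(feeding_intervals(vs, threshold=20.0))  # prints [(3, 8)]
```

The detected interval would then supply reproducible binary feeding/non-feeding labels for clips, which is the role the Perceptual Dataset plays in the framework.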
