Attention-enhanced MobileNetV2 models for robust forest fire detection and classification


Abstract

Early detection of forest fires is essential to limit ecological damage and economic loss. This study evaluates two lightweight convolutional models for binary fire recognition using a balanced dataset of 5121 annotated images spanning diverse environments and illumination conditions. The first model, Att-MobileNetV2, augments MobileNetV2 with a Convolutional Block Attention Module to prioritize informative spatial and channel responses. The second model, MobileNetV2-TL, adopts transfer learning by retaining pre-trained MobileNetV2 weights and training compact task-specific heads. On the held-out test set, Att-MobileNetV2 attains 99.61% accuracy with an F1-score of 99.70%, precision of 99.32%, and recall of 99.19%. MobileNetV2-TL achieves 98.42% accuracy, 98.43% F1-score, 98.42% precision, and 99.47% recall. Ablation results indicate that attention improves discriminability over the MobileNetV2 backbone, and attention heatmaps provide qualitative evidence of focus on flame regions. Comparisons with classical machine-learning pipelines (RFC, SVM) and CNN baselines (e.g., VGG16) under a unified preprocessing and training regimen show consistent improvements. Model size and computational load remain sufficiently low for real-time inference on resource-limited platforms, including UAVs and fixed cameras. The results indicate a favorable balance between accuracy and efficiency and point to practical deployment in continuous fire-monitoring settings.
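Below is a minimal sketch of how the two model variants described above could be assembled, assuming a PyTorch/torchvision setup. The CBAM placement (after the MobileNetV2 feature extractor), the head sizes, and the dropout rate are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    """CBAM channel attention: shared MLP over global avg- and max-pooled features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """CBAM spatial attention: 7x7 conv over channel-wise avg/max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

def att_mobilenetv2(num_classes=2):
    # Att-MobileNetV2 (sketch): CBAM applied to the 1280-channel MobileNetV2
    # feature maps before pooling and classification. The exact insertion point
    # used in the paper may differ.
    backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    return nn.Sequential(
        backbone.features,
        CBAM(1280),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(1280, num_classes),
    )

def mobilenetv2_tl(num_classes=2):
    # MobileNetV2-TL (sketch): frozen pre-trained backbone with a compact
    # task-specific head trained for binary fire recognition.
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False
    model.classifier = nn.Sequential(
        nn.Dropout(0.2),
        nn.Linear(model.last_channel, num_classes),
    )
    return model

if __name__ == "__main__":
    # Quick shape check with a dummy 224x224 RGB input.
    x = torch.randn(1, 3, 224, 224)
    print(att_mobilenetv2()(x).shape)   # torch.Size([1, 2])
    print(mobilenetv2_tl()(x).shape)    # torch.Size([1, 2])
```

Keeping the backbone frozen in the transfer-learning variant and adding only a lightweight attention block in the attention variant is consistent with the abstract's emphasis on a small model footprint suitable for UAVs and fixed cameras.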
