Air quality estimation from sequential surveillance images using a unified CNN-RNN framework



Abstract

Air pollution monitoring is essential for urban environmental management. However, traditional approaches, such as ground-based stations and satellite remote sensing, are constrained by high costs, limited spatial or temporal resolution, and poor nighttime applicability. This study develops a unified convolutional-recurrent neural network (CNN-RNN) framework that jointly learns spatial cues and temporal dynamics from surveillance image sequences to estimate the air quality index (AQI) under varying illumination, including night and twilight. Evaluated on more than 28,000 hourly images from six sites in southern Kaohsiung, Taiwan, the unified model consistently surpasses single-image baselines across sites and time periods and improves performance in higher pollution categories. The same pipeline extends to PM2.5 and PM10 and adapts to other cities through fine-tuning with a few labeled samples. These results indicate that the framework can support round-the-clock, accurate air quality sensing and enable scalable deployment in camera networks to complement conventional monitoring.
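The abstract describes a unified design in which a CNN extracts per-frame spatial features and an RNN aggregates them over the image sequence before regressing AQI. The sketch below illustrates that general pattern in PyTorch; the backbone, feature dimensions, sequence length, and `CnnRnnAqiEstimator` name are all illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CnnRnnAqiEstimator(nn.Module):
    """Illustrative CNN-RNN sketch: a small CNN encodes each frame,
    a GRU aggregates the sequence, and a linear head regresses AQI.
    All layer sizes are placeholder assumptions, not the paper's spec."""
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Per-frame CNN encoder (the paper's actual backbone is unspecified here)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal model over the sequence of frame features
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # scalar AQI estimate

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) -- e.g. consecutive hourly images
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, h_n = self.rnn(feats)          # final hidden state summarizes the sequence
        return self.head(h_n[-1]).squeeze(-1)  # (batch,) AQI predictions

model = CnnRnnAqiEstimator()
x = torch.randn(2, 6, 3, 64, 64)  # 2 sequences of 6 hourly frames
aqi = model(x)                    # shape (2,)
```

The same regression head could in principle be retargeted to PM2.5 or PM10 labels, matching the abstract's claim that the pipeline extends to those pollutants.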
