Proximal Policy Optimization-based Task Offloading Framework for Smart Disaster Monitoring using UAV-assisted WSNs


Abstract

Unmanned Aerial Vehicles (UAVs) are increasingly employed in Wireless Sensor Networks (WSNs) to enhance communication, coverage, and energy efficiency, particularly in disaster monitoring and remote surveillance scenarios. However, challenges such as limited energy resources, dynamic task allocation, and UAV trajectory optimization remain critical. This paper presents Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs (ETORL-UAV), a novel framework that integrates Proximal Policy Optimization (PPO)-based reinforcement learning to intelligently manage UAV-assisted operations in edge-enabled WSNs. The proposed approach utilizes a multi-objective reward model to adaptively balance energy consumption, task success rate, and network lifetime. Extensive simulation results demonstrate that ETORL-UAV outperforms five state-of-the-art methods (Meta-RL, g-MAPPO, Backscatter Optimization, Hierarchical Optimization, and Game Theory-based Pricing), achieving up to 9.3% higher task offloading success, an 18.75% improvement in network lifetime, and a 27% reduction in energy consumption. These results validate the framework's scalability, reliability, and practical applicability for real-world disaster-response WSN deployments.

Highlights

• Proposes ETORL-UAV: Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs.

• Leverages PPO-based reinforcement learning and a multi-objective reward model.

• Demonstrates superior performance over five benchmark approaches in disaster-response simulations.
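The multi-objective reward model described above can be sketched as a weighted combination of the three stated objectives. The weights, normalization, and function name below are illustrative assumptions for exposition, not values taken from the paper:

```python
# Hypothetical sketch of a multi-objective reward balancing energy
# consumption, task success rate, and network lifetime, as the abstract
# describes. All weights and normalizations here are assumptions.

def multi_objective_reward(energy_used, energy_budget,
                           tasks_done, tasks_total,
                           alive_nodes, total_nodes,
                           w_energy=0.3, w_success=0.5, w_lifetime=0.2):
    """Combine the three objectives into one scalar reward in [0, 1]."""
    # Lower energy use relative to the budget yields a higher reward.
    energy_term = 1.0 - min(energy_used / energy_budget, 1.0)
    # Fraction of offloaded tasks completed successfully.
    success_term = tasks_done / tasks_total
    # Fraction of sensor nodes still alive, a simple lifetime proxy.
    lifetime_term = alive_nodes / total_nodes
    return (w_energy * energy_term
            + w_success * success_term
            + w_lifetime * lifetime_term)
```

In a PPO setup, such a scalar reward would be returned by the environment at each decision step, letting the policy trade off the three objectives through the chosen weights.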
