Deep Reinforcement Learning for Joint Trajectory Planning, Transmission Scheduling, and Access Control in UAV-Assisted Wireless Sensor Networks


Abstract

Unmanned aerial vehicles (UAVs) can be used to relay sensing information and computational workloads from ground users (GUs) to a remote base station (RBS) for further processing. In this paper, we employ multiple UAVs to assist with the collection of sensing information in a terrestrial wireless sensor network; all of the information collected by the UAVs is forwarded to the RBS. We aim to improve the energy efficiency of sensing-data collection and transmission by optimizing the UAVs' trajectory, scheduling, and access-control strategies. We consider a time-slotted frame structure in which each time slot is divided into flight, sensing, and information-forwarding sub-slots. This structure motivates a study of the trade-off between UAV access control and trajectory planning: collecting more sensing data in one time slot occupies more UAV buffer space and requires a longer transmission time for information forwarding. We solve this problem with a multi-agent deep reinforcement learning approach that accounts for a dynamic network environment with uncertain information about the GU spatial distribution and traffic demands. We further devise a hierarchical learning framework with reduced action and state spaces that exploits the distributed structure of the UAV-assisted wireless sensor network to improve learning efficiency. Simulation results show that UAV trajectory planning with access control can significantly improve UAV energy efficiency, and that the hierarchical learning method is more stable during training and achieves higher sensing performance.
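As a rough illustration of the time-slotted frame structure and the buffer trade-off described in the abstract, the sketch below models one slot divided into flight, sensing, and forwarding sub-slots. All numeric values (sub-slot durations, sensing and forwarding rates) are hypothetical placeholders, not parameters from the paper; the point is only that allocating more of a slot to sensing leaves less time for forwarding, so buffered data grows.

```python
from dataclasses import dataclass

@dataclass
class TimeSlot:
    """One frame slot, split into flight, sensing, and forwarding sub-slots (seconds)."""
    flight: float      # time the UAV spends moving along its trajectory
    sensing: float     # time collecting data from ground users (GUs)
    forwarding: float  # time relaying buffered data to the remote base station (RBS)

    @property
    def duration(self) -> float:
        return self.flight + self.sensing + self.forwarding

# Hypothetical 1-second slot: half of it spent sensing.
slot = TimeSlot(flight=0.2, sensing=0.5, forwarding=0.3)

SENSE_RATE = 4.0    # Mbit/s collected while sensing (assumed value)
FORWARD_RATE = 6.0  # Mbit/s relayed to the RBS (assumed value)

collected = SENSE_RATE * slot.sensing       # data gathered this slot
forwarded = FORWARD_RATE * slot.forwarding  # data relayed this slot
buffer_growth = collected - forwarded       # net data left in the UAV buffer
```

With these placeholder rates, the UAV collects more than it can forward within the slot, so the buffer grows; this is the pressure that an access-control policy (admitting fewer GU transmissions per slot) would relieve, at the cost of sensing performance.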
