Improved double DQN with deep reinforcement learning for UAV indoor autonomous obstacle avoidance



Abstract

To address the insufficient autonomous obstacle avoidance performance of UAVs in complex indoor environments, an improved Double DQN algorithm based on deep reinforcement learning is proposed. The algorithm enhances perception and learning capability by optimizing the network model, and employs a dynamic exploration strategy that encourages exploration in the early stage and reduces it later, accelerating convergence and improving efficiency. Simulation experiments in two scenarios of differing complexity, conducted in an indoor simulation environment built with AirSim and UE4 (Unreal Engine 4), show that in the simpler scenario the average cumulative reward increased by 22.88%, the maximum reward by 101.56%, the average safe flight distance by 23.17%, and the maximum safe flight distance by 105.62%. In the more complex scenario, the average cumulative reward increased by 2.66%, the maximum reward by 88.77%, the average safe flight distance by 2.05%, and the maximum safe flight distance by 84.68%.
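The two ingredients named in the abstract can be sketched generically. In Double DQN, the online network selects the next action while the target network evaluates it, which reduces the overestimation bias of vanilla DQN; the "dynamic exploration strategy" is hedged here as a simple linear epsilon decay (the paper's exact schedule and network optimizations are not given in the abstract, so function names and parameters below are illustrative assumptions):

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN bootstrap target: the action is *selected* by the
    online network, but *evaluated* by the target network."""
    best_actions = np.argmax(next_q_online, axis=1)          # selection
    next_values = next_q_target[np.arange(len(best_actions)),
                                best_actions]                # evaluation
    # Terminal transitions (dones == 1) bootstrap nothing.
    return rewards + gamma * (1.0 - dones) * next_values

def epsilon_schedule(step, eps_start=1.0, eps_end=0.05, decay_steps=10_000):
    """Dynamic exploration: near-full exploration early, linearly
    annealed toward a small floor to speed up convergence."""
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

# Example: a batch of two transitions with gamma = 0.5.
targets = double_dqn_targets(
    rewards=np.array([1.0, 0.0]),
    next_q_online=np.array([[0.2, 0.8], [0.5, 0.1]]),
    next_q_target=np.array([[0.3, 0.6], [0.4, 0.2]]),
    dones=np.array([0.0, 1.0]),
    gamma=0.5,
)
```

The decoupling of action selection from evaluation is what distinguishes Double DQN from DQN, whose single max over the target network systematically overestimates Q-values.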
