Deep reinforcement learning-based mechanism to improve the throughput of EH-WSNs


Abstract

Energy Harvesting Wireless Sensor Networks (EH-WSNs) are widely adopted for their ability to harvest ambient energy. However, these networks face significant challenges because energy availability at individual nodes is limited and varies continuously, depending on unpredictable environmental sources. To operate effectively under such conditions, each node's energy level must be monitored continuously and its operation adjusted adaptively. State-of-the-art mechanisms often categorize nodes or discretize energy levels, which simplifies the state representation and eases design and implementation, but prevents nodes from selecting actions appropriate to their actual energy states: subtle variations in energy level are overlooked, leading to inaccurate assessments and suboptimal performance. To overcome this limitation, this paper proposes an energy-aware transmission method based on a Deep Reinforcement Learning (DRL) algorithm that integrates Q-learning with Deep Neural Networks (DNNs). This method enables each node to select transmission actions adaptively from its real-time, continuous energy state, improving responsiveness to dynamic network conditions. Simulation results show that the proposed method improves throughput by 11.79% compared to traditional methods. These findings demonstrate the effectiveness of DRL-based control in enhancing performance and energy efficiency in EH-WSNs.
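To make the core idea concrete, the following is a minimal sketch of the kind of scheme the abstract describes: a node feeds its continuous (non-discretized) energy level into a small neural network that approximates Q-values, then selects a transmission action epsilon-greedily and updates the network with a Q-learning target. The action set, energy-harvesting model, rewards, network sizes, and all function names here are illustrative assumptions, not the paper's actual implementation or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action set and per-action costs/rewards (assumptions).
ACTIONS = ["sleep", "tx_low", "tx_high"]
TX_COST = np.array([0.0, 0.1, 0.25])    # energy drained by each action
TX_REWARD = np.array([0.0, 0.5, 1.0])   # throughput gained if energy suffices

# Tiny two-layer network approximating Q(state, .); state = [energy level].
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 3)); b2 = np.zeros(3)

def q_values(energy):
    """Forward pass: continuous energy level in, one Q-value per action out."""
    h = np.maximum(0.0, np.array([energy]) @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def select_action(energy, epsilon=0.1):
    """Epsilon-greedy choice directly over the continuous energy state."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q_values(energy)))

def step(energy, action):
    """Toy environment: harvest a random amount, pay the action's cost."""
    harvested = rng.uniform(0.0, 0.15)
    if energy >= TX_COST[action]:
        reward = float(TX_REWARD[action])
        energy -= TX_COST[action]
    else:
        reward = -0.5  # penalize attempting an infeasible transmission
    return min(1.0, energy + harvested), reward

def train_episode(energy=0.5, steps=200, gamma=0.9, lr=0.01):
    """One episode of Q-learning with gradient updates on the tiny network."""
    global W1, b1, W2, b2
    total = 0.0
    for _ in range(steps):
        a = select_action(energy)
        next_energy, r = step(energy, a)
        total += r
        # TD target uses the greedy value of the next continuous state.
        target = r + gamma * float(np.max(q_values(next_energy)))
        # Forward pass, keeping intermediates for backpropagation.
        x = np.array([energy])
        h = np.maximum(0.0, x @ W1 + b1)
        q = h @ W2 + b2
        err = q[a] - target
        # Gradient of 0.5 * err^2 w.r.t. the chosen action's output only.
        g_out = np.zeros(3); g_out[a] = err
        g_h = (g_out @ W2.T) * (h > 0)          # backprop before updating W2
        W2 -= lr * np.outer(h, g_out); b2 -= lr * g_out
        W1 -= lr * np.outer(x, g_h); b1 -= lr * g_h
        energy = next_energy
    return total
```

Because the state is a real-valued energy level rather than a discrete bucket, the network can respond to arbitrarily small energy differences, which is exactly the limitation of discretization that the abstract targets. A full DQN would add an experience-replay buffer and a separate target network for stability; they are omitted here for brevity.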
