Robust Learning-Based Detection with Cost Control and Byzantine Mitigation


Abstract

To address the state estimation and detection problem in the presence of noisy sensor observations, probing costs, and communication noise, in this paper we propose a soft actor-critic (SAC) deep reinforcement learning (DRL) framework for dynamically scheduling sensors and sequentially probing the state of a stochastic system. Moreover, considering Byzantine attacks, we design a generative adversarial network (GAN)-based framework to identify Byzantine sensors. The GAN-based Byzantine detector and the SAC-DRL-based agent operate in coordination to detect the state of the system reliably and quickly while incurring a low sensing cost. To evaluate the proposed framework, we measure performance in terms of detection accuracy, stopping time, and the total probing cost needed for detection. Via simulation results, we show that soft actor-critic algorithms are flexible and effective in action selection in imperfectly known environments due to the maximum entropy strategy, and that they achieve stable performance in challenging test cases (e.g., involving jamming attacks, imperfectly known noise power levels, and high sensing costs). We also compare the performance of the proposed soft actor-critic algorithm against conventional actor-critic algorithms and fixed scheduling strategies. Finally, we analyze the impact of Byzantine attacks and quantify the reliability and accuracy improvements achieved by the GAN-based approach when combined with the SAC-DRL-based decision-making agent.
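The coordination the abstract describes — a scheduling agent probes sensors, a Byzantine detector screens their reports, and a sequential stopping rule ends probing once the state estimate is confident enough — can be illustrated with a minimal sketch. Everything below is a hypothetical placeholder: the uniform-random sensor choice stands in for the SAC policy, the known flagged set stands in for the GAN-based detector, and the accuracy model, thresholds, and function names are illustrative assumptions, not the paper's method.

```python
import random

def run_episode(true_state, num_sensors=5, byzantine=(2,), cost_per_probe=1.0,
                confidence_target=0.9, max_steps=50, seed=0):
    """Hypothetical sense-filter-decide loop for a binary state in {0, 1}.

    Placeholder components (assumptions, not the paper's design):
      - sensor choice: uniform random, standing in for the SAC-DRL policy;
      - Byzantine screen: a known flagged set, standing in for the GAN detector;
      - sensor model: honest sensors report the state with 80% accuracy,
        Byzantine sensors flip their reports.
    Returns (decision, stopping_time, total_probing_cost).
    """
    rng = random.Random(seed)
    belief = 0.5           # posterior probability that true_state == 1
    total_cost = 0.0
    for t in range(1, max_steps + 1):
        sensor = rng.randrange(num_sensors)   # placeholder for the learned policy
        total_cost += cost_per_probe          # every probe is charged
        honest_report = true_state if rng.random() < 0.8 else 1 - true_state
        report = 1 - honest_report if sensor in byzantine else honest_report
        # Screen out reports from sensors flagged as Byzantine: their
        # observations are discarded, but the probing cost is still paid.
        if sensor in byzantine:
            continue
        # Bayesian update under the assumed 80%-accuracy sensor model.
        like1 = 0.8 if report == 1 else 0.2
        like0 = 0.2 if report == 1 else 0.8
        p1, p0 = belief * like1, (1 - belief) * like0
        belief = p1 / (p1 + p0)
        # Sequential stopping rule: declare once the belief is confident enough.
        if belief >= confidence_target or belief <= 1 - confidence_target:
            return (1 if belief >= confidence_target else 0), t, total_cost
    return (1 if belief >= 0.5 else 0), max_steps, total_cost
```

For example, `run_episode(true_state=1)` returns a decision together with the stopping time and accumulated probing cost, which are exactly the three performance metrics the abstract evaluates; a real implementation would replace the random policy and the fixed flagged set with the trained SAC agent and GAN detector.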
