Causal deep learning for enhancing explainability in 6G network edge intelligence anomaly detection


Abstract

With the rapid development of 6G networks, anomaly detection in network edge intelligence faces significant challenges in system interpretability and trustworthiness. Although machine learning-based methods improve detection performance, their black-box nature limits reliable cybersecurity decision support. To address this, we propose a novel framework integrating causal inference with LSTM networks. Our approach first applies a Random Fourier Feature transformation to eliminate nonlinear feature correlations, a prerequisite for valid causal analysis. We then quantify feature-specific causal effects using sample-weighted adjustments to ensure model stability. Furthermore, Generative Adversarial Networks generate high-quality minority-class samples to augment the training data, enhancing anomaly detection accuracy. Experimental validation on two large-scale datasets demonstrates a 33.7% improvement in explainability and a 68% reduction in root-cause localization time. This work establishes a new cybersecurity paradigm for 6G edge intelligence through causal reasoning.
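The abstract does not give implementation details for the decorrelation step. As an illustrative sketch only, a standard Random Fourier Feature map (in the Rahimi-Recht style, approximating an RBF kernel) could be written as follows; the function name, parameters, and toy data here are assumptions for illustration, not the authors' code:

```python
import numpy as np

def random_fourier_features(X, n_components=32, gamma=1.0, seed=0):
    """Map X of shape (n_samples, n_features) into a randomized cosine
    feature space approximating an RBF kernel. A transform of this kind
    can weaken nonlinear dependencies between raw features before
    downstream causal analysis (illustrative sketch, not the paper's code)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(n_features, n_components))
    # Random phase offsets
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

# Toy usage: transform two correlated raw features into 32 randomized features
X = np.random.default_rng(1).normal(size=(200, 2))
Z = random_fourier_features(X, n_components=32)
print(Z.shape)  # (200, 32)
```

Each output column is a bounded cosine of a random linear projection, so inner products in the transformed space approximate RBF kernel similarities; the sample-weighted causal-effect estimation described in the abstract would then operate on these decorrelated features.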
