Abstract
The convergence of Mobile Edge Computing (MEC) and network slicing is critical to meeting the diverse quality-of-service (QoS) requirements of 5G/6G networks. However, competition for resources among multiple tenants (eMBB/uRLLC/mMTC) in dynamic environments challenges traditional centralized allocation methods. In this paper, we propose a cooperative optimization framework for edge network slicing resources that fuses multi-agent reinforcement learning (MARL) with evolutionary game theory (MARL-EGT). The framework models each slice tenant as an agent with autonomous decision-making capability that explores optimal resource-request strategies through interactive learning; meanwhile, evolutionary game dynamics are introduced to model the imitation, learning, and evolution of strategies across the slice population, guiding the system to converge to an efficient evolutionarily stable strategy (ESS). To cope with the large environment state space and the difficulty of coordinating agents, we design a hierarchical attention mechanism and a credit-based contribution evaluation algorithm that significantly improve learning efficiency and convergence speed. In simulation experiments under an MEC scenario constructed from real data, the MARL-EGT scheme significantly outperforms benchmark methods such as federated reinforcement learning (FRL) and the non-cooperative game (NCG) on key metrics, including total system utility, slice SLA satisfaction rate, and resource utilization, and demonstrates superior adaptability to dynamic environments, offering new insights into large-scale, intelligent resource management for edge network slicing.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-025-33190-5.