Abstract
In edge networks, a node failure or resource pressure can disrupt the Service Function Chains (SFCs) deployed on that node, requiring multiple Virtual Network Functions (VNFs) to be migrated efficiently under limited resources. To address this challenge, this paper proposes RL-PMO, an offline reinforcement learning-based parallel migration optimization algorithm that migrates multiple VNFs in parallel. The method follows a two-stage framework: in the first stage, improved heuristic algorithms generate high-quality migration trajectories to construct a multi-scenario dataset; in the second stage, a Decision Mamba model is trained as the policy network. Through its selective modeling of structured sequences, Decision Mamba captures the dependencies between VNFs and the underlying resources; combined with a twin-critic architecture and CQL regularization, it effectively mitigates distribution shift and Q-value overestimation. Simulation results show that RL-PMO maintains a migration success rate of approximately 95% across different load conditions and outperforms typical offline RL algorithms such as IQL by about 13% under low and medium loads and by up to 17% under high loads. Overall, RL-PMO provides an efficient, reliable, and resource-aware solution for SFC migration in node failure scenarios.