Multi-task meta-initialized DQN for fast adaptation to unseen slicing tasks in O-RAN

For unseen slicing tasks in O-RAN, this work proposes a multi-task meta-initialized DQN for fast adaptation.


Abstract

The open radio access network (O-RAN) architecture facilitates intelligent radio resource management via RAN intelligent controllers (RICs). Deep reinforcement learning (DRL) algorithms are integrated into RICs to address dynamic O-RAN slicing challenges. However, DRL-based O-RAN slicing suffers from instability and performance degradation when deployed on unseen tasks. We propose M2DQN, a hybrid framework that combines multi-task learning (MTL) and meta-learning to optimize DQN initialization parameters for rapid adaptation. Our method decouples the DQN into two components: shared layers trained via MTL to capture cross-task representations, and task-specific layers optimized through meta-learning for efficient fine-tuning. Experiments in an open-source network slicing environment demonstrate that M2DQN outperforms MTL, meta-learning, and policy reuse baselines, achieving improved initial performance across 91 unseen tasks. This demonstrates an effective balance between transferability and adaptability. Code is available at: https://github.com/bszeng/M2DQN.
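The decoupled training scheme described in the abstract can be illustrated with a minimal sketch. The model below is a toy stand-in, not the paper's DQN: a two-layer linear network whose parameters are split into a shared block (updated with task-averaged gradients, as in MTL) and a task-specific block (whose initialization is updated Reptile-style toward per-task adapted weights, one common form of meta-learning). All names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the decoupled DQN: parameters split into a shared
# block (W_shared) and a task-specific block (W_task). Illustrative only.
def predict(W_shared, W_task, x):
    return x @ W_shared @ W_task

def grads(W_shared, W_task, x, y):
    # Gradients of the mean squared error for both parameter blocks.
    err = (predict(W_shared, W_task, x) - y) / len(x)
    return x.T @ err @ W_task.T, (x @ W_shared).T @ err

def mean_loss(W_shared, W_task, tasks):
    return float(np.mean([np.mean((predict(W_shared, W_task, x) - y) ** 2)
                          for x, y in tasks]))

# Hypothetical set of training tasks: random regression problems.
tasks = [(rng.normal(size=(32, 4)), rng.normal(size=(32, 2)))
         for _ in range(5)]

W_shared = rng.normal(scale=0.5, size=(4, 3))  # shared layers (MTL)
W_task = rng.normal(scale=0.5, size=(3, 2))    # task-specific init (meta)
lr, meta_lr = 0.05, 0.5                        # assumed hyperparameters

loss_before = mean_loss(W_shared, W_task, tasks)
for _ in range(200):
    g_shared = np.zeros_like(W_shared)
    adapted = []
    for x, y in tasks:
        gs, gt = grads(W_shared, W_task, x, y)
        g_shared += gs                    # MTL: accumulate shared gradients
        adapted.append(W_task - lr * gt)  # meta: one inner adaptation step
    # MTL update: shared layers follow the task-averaged gradient,
    # capturing a cross-task representation.
    W_shared -= lr * g_shared / len(tasks)
    # Reptile-style meta update: move the task-specific initialization
    # toward the per-task adapted weights, so fine-tuning starts well.
    W_task += meta_lr * (np.mean(adapted, axis=0) - W_task)
loss_after = mean_loss(W_shared, W_task, tasks)
```

At deployment on an unseen task, only `W_task` would be fine-tuned from this learned initialization while `W_shared` is reused, mirroring the transferability/adaptability split the abstract describes.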
