Towards intelligent edge computing through reinforcement learning based offloading in public edge as a service


Abstract

Internet of Things (IoT) deployments face increasing challenges in meeting strict latency and cost requirements while ensuring efficient resource utilization in distributed environments. Traditional offloading approaches often overlook the role of intermediate regional layers and device mobility, resulting in inefficiencies in real-world deployments. To address this gap, we propose Public Edge as a Service (PEaaS) as an intermediate tier and develop RegionalEdgeSimPy, a Python simulator, to model and evaluate this framework. The framework uses a Proximal Policy Optimization (PPO) scheduler that models mobility and considers multiple input parameters (e.g., network latency, cost, congestion, and energy). Tasks are first evaluated at the serving Wireless Access Point (WAP) for feasibility under utilization thresholds. The scheduling decision uses action masking to restrict invalid options and a reward function that integrates latency, cost, congestion, and energy to guide optimal offloading. Simulations were conducted with 10 to 3000 devices in a 10 × 10 km smart-city area. Results show that PPO prioritizes Edge processing until over-utilization, after which workloads are offloaded to the nearest PEaaS node, with the Cloud used sparingly. On average, the Edge achieves 75.8% utilization, PEaaS stabilizes near 52.9%, and Cloud utilization remains under 1.2% when active. These findings demonstrate that PPO scheduling significantly reduces delay, cost, and task failures, providing improved scalability for mobile IoT big data processing.
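The decision step described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of utilization-based action masking combined with a reward that penalizes latency, cost, congestion, and energy; the tier names, weights, threshold, and per-tier metric values are illustrative assumptions, not parameters or results from the paper (the actual policy is learned with PPO rather than chosen greedily as here).

```python
def choose_tier(utilization, metrics, threshold=0.8, w=(0.4, 0.3, 0.2, 0.1)):
    """Pick the feasible tier with the highest reward.

    utilization: {tier: fraction in [0, 1]} -- tiers above `threshold`
                 are masked out (action masking).
    metrics:     {tier: (latency, cost, congestion, energy)}.
    w:           illustrative weights; the reward is a negative weighted
                 sum, so lower latency/cost/congestion/energy is better.
    """
    best_tier, best_score = None, float("-inf")
    for tier, u in utilization.items():
        if u > threshold:
            continue  # masked action: tier is over-utilized, never selected
        latency, cost, congestion, energy = metrics[tier]
        score = -(w[0] * latency + w[1] * cost
                  + w[2] * congestion + w[3] * energy)
        if score > best_score:
            best_tier, best_score = tier, score
    return best_tier


# Example: the Edge is over-utilized, so the task falls through to PEaaS,
# matching the behavior reported in the abstract.
utilization = {"edge": 0.95, "peaas": 0.50, "cloud": 0.10}
metrics = {               # (latency, cost, congestion, energy) -- made up
    "edge":  (1.0, 0.2, 0.9, 0.3),
    "peaas": (3.0, 0.5, 0.4, 0.5),
    "cloud": (9.0, 1.0, 0.1, 0.8),
}
print(choose_tier(utilization, metrics))  # → peaas
```

In the paper's setup a PPO policy network would produce action probabilities that are masked the same way; the greedy argmax above only stands in for that learned policy to make the masking-plus-reward structure concrete.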
