Energy-Efficient Dynamic Workflow Scheduling in Cloud Environments Using Deep Learning

Dynamic workflow scheduling in cloud environments is challenging due to task dependencies, fluctuating workloads, resource variability, and the need to balance makespan and energy consumption. This study presents a novel scheduling framework that integrates Graph Neural Networks (GNNs) with Deep Reinforcement Learning (DRL), using the Proximal Policy Optimization (PPO) algorithm, to achieve multi-objective optimization that minimizes makespan and reduces energy consumption. By leveraging GNNs to model task dependencies within workflows, the framework enables adaptive and informed resource allocation. The agent was evaluated in a CloudSim-based simulation environment using synthetic datasets. Experimental results across benchmark datasets demonstrate the framework's effectiveness, showing consistent improvements in makespan and energy consumption over traditional heuristic methods. On moderate-sized datasets, the framework achieved a minimum makespan of 689.22 s versus 800.72 s for the second-best method, an improvement of up to 13.92% over baseline methods such as HEFT, Min-Min, and Max-Min, while maintaining a competitive energy consumption of 10,964.45 J. These findings highlight the potential of combining GNNs and DRL for dynamic task scheduling in cloud environments, effectively balancing multiple objectives.
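To make the approach concrete, the sketch below illustrates the two core ideas the abstract describes: a GNN layer that propagates task features along workflow dependency edges, and a scalarized reward that trades off makespan against energy for the PPO agent. This is a minimal PyTorch illustration, not the authors' implementation; the names (`TaskGNNEncoder`, `scalarized_reward`), the weight `alpha`, and the normalization constants are assumptions made for the example.

```python
# Minimal sketch (not the paper's code): one round of message passing over a
# workflow DAG plus a weighted-sum makespan/energy reward for the RL agent.
import torch
import torch.nn as nn

class TaskGNNEncoder(nn.Module):
    """Encodes per-task features with one round of message passing
    along workflow dependency edges given by an adjacency matrix."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.self_proj = nn.Linear(in_dim, hid_dim)
        self.neigh_proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   [num_tasks, in_dim] task features (e.g., length, file sizes)
        # adj: [num_tasks, num_tasks], adj[i, j] = 1 if task j is a parent of i
        neigh = adj @ x  # aggregate predecessor features
        return torch.relu(self.self_proj(x) + self.neigh_proj(neigh))

def scalarized_reward(makespan_s: float, energy_j: float,
                      alpha: float = 0.5,
                      makespan_ref: float = 1000.0,
                      energy_ref: float = 15000.0) -> float:
    """Weighted-sum reward: lower makespan and energy give a higher reward.
    The reference values normalize the two objectives to comparable scales
    (illustrative constants, not taken from the paper)."""
    return -(alpha * makespan_s / makespan_ref
             + (1 - alpha) * energy_j / energy_ref)

# Example: encode a 3-task chain t0 -> t1 -> t2, then score one episode
# using the makespan and energy values reported in the abstract.
x = torch.randn(3, 4)                       # 4 synthetic features per task
adj = torch.tensor([[0., 0., 0.],
                    [1., 0., 0.],
                    [0., 1., 0.]])          # row i marks the parents of task i
emb = TaskGNNEncoder(4, 16)(x, adj)         # [3, 16] task embeddings
print(scalarized_reward(689.22, 10964.45))
```

In a full pipeline these task embeddings would form part of the PPO agent's observation, and the scalarized reward would be returned by the CloudSim-based environment at each scheduling step; the `alpha` weight is the usual knob for steering the makespan/energy trade-off in a weighted-sum formulation.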
Authors: Chandrasiri Sunera, Meedeniya Dulani
| Journal: | Sensors | Impact factor: | 3.500 |
| Date: | 2025 | Issue/pages: | 2025 Feb 26; 25(5):1428 |
| DOI: | 10.3390/s25051428 | | |
