Deep reinforcement learning for scheduling semiconductor cluster tools in varying configurations


Abstract

Traditional rule-based cluster tool scheduling in semiconductor manufacturing faces significant limitations, including inflexibility, reliance on domain-specific expertise, and suboptimal performance in dynamic and complex environments. These methods often struggle to adapt to the varying process conditions and equipment configurations common in modern fabrication facilities (fabs). Furthermore, previous research has typically relied on simplified cluster tool simulators that fail to capture the full complexity of real-world semiconductor manufacturing equipment. To address these challenges, this study examines the potential of deep reinforcement learning (DRL) for optimizing cluster tool scheduling. It presents a comprehensive simulation environment that models a cluster tool system with both vacuum transfer module (VTM) and atmospheric transfer module (ATM) robots. DRL agents are evaluated progressively, starting with a single-agent deep Q-network (DQN) and advancing to a multi-agent DQN (MADQN) framework that schedules the combined VTM-ATM system. Experimental results show that the proposed DRL agents consistently outperform traditional rule-based methods in productivity and adaptability. In the complex multi-agent environment, the MADQN agent performed robustly across all tested configurations, achieving a productivity improvement of up to 8.9% over standard rule-based scheduling methods. These findings highlight the potential of DRL to overcome the limitations of existing scheduling methods and significantly enhance productivity in semiconductor manufacturing.
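To make the DQN-style decision loop concrete, below is a minimal sketch, not the paper's implementation: a tabular Q-learning agent on a hypothetical two-chamber routing task, where the state is which chamber is free and the action is where to route the wafer. A DQN replaces the lookup table with a neural network (and adds replay and target networks), but the epsilon-greedy action selection and one-step TD update shown here are the same core mechanism.

```python
import random
from collections import defaultdict

class QAgent:
    """Tabular Q-learning agent; a toy stand-in for a DQN scheduler."""

    def __init__(self, actions, lr=0.5, gamma=0.9, eps=0.1, seed=0):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.rng = random.Random(seed)

    def act(self, state):
        # Epsilon-greedy: explore with probability eps, otherwise exploit.
        if self.rng.random() < self.eps:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next, done):
        # One-step TD target: r + gamma * max_a' Q(s', a').
        best_next = 0.0 if done else max(self.q[(s_next, a2)] for a2 in self.actions)
        target = r + self.gamma * best_next
        self.q[(s, a)] += self.lr * (target - self.q[(s, a)])


# Toy episode: chamber 'A' is free; routing to the free chamber pays 1.
agent = QAgent(actions=["A", "B"], eps=0.0)
for _ in range(100):
    agent.update("A", "A", 1.0, "A", done=True)   # good routing
    agent.update("A", "B", 0.0, "A", done=True)   # wafer blocked
```

After training, `agent.act("A")` routes to the free chamber. The multi-agent (MADQN) extension would run one such agent per robot (VTM and ATM), each observing the shared tool state.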
