Opposition-based learning techniques in metaheuristics: classification, comparison, and convergence analysis


Abstract

In recent years, opposition-based learning (OBL) has emerged as a powerful enhancement strategy for metaheuristic algorithms (MAs), gaining significant attention for its potential to accelerate convergence and improve solution quality. However, existing research lacks a structured analysis of how different OBL variants influence optimization performance when integrated into various MAs. This study categorizes and analyzes nine distinct OBL techniques: basic opposition-based learning, quasi-opposition-based learning, generalized opposition-based learning, current-optimum opposition-based learning, quasi-reflection opposition-based learning, centroid opposition-based learning, random opposition-based learning, super opposition-based learning, and stochastic opposition-based learning. To systematically assess the effectiveness of these techniques, five widely used OBL variants (basic, quasi-opposition, generalized, current-optimum, and quasi-reflection opposition-based learning) were implemented within five well-established MAs: differential evolution, genetic algorithm, particle swarm optimization, artificial bee colony, and harmony search. The hybridized algorithms were evaluated under three integration schemes: the initialization phase, the generation-update phase, and both phases combined. To experimentally demonstrate the capability of OBL strategies to enhance MAs that suffer from common issues such as slow convergence, limited exploration, and an imbalanced exploration-exploitation trade-off, 12 benchmark functions from the CEC2022 suite were used. Key performance metrics, including the maximum, minimum, mean, standard deviation, and convergence curves, were rigorously analyzed to quantify the improvements introduced by each OBL-enhanced MA. Additionally, a Friedman test was conducted to statistically validate the performance differences among the variants.
The results indicate that quasi-reflection opposition-based learning consistently outperforms other OBL variants, demonstrating superior convergence speed and solution quality across most benchmark functions.
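Several of the OBL variants named in the abstract have simple closed-form definitions over a search interval [a, b]: the basic opposite mirrors a point across the interval midpoint, quasi-opposition samples between the midpoint and the opposite, and quasi-reflection samples between the point and the midpoint. The sketch below is a minimal one-dimensional illustration of these three variants plus opposition-based initialization (keeping the fitter half of a population merged with its opposites); it is an assumed reading of the standard formulations, not code from the paper, and all function names are illustrative.

```python
import random


def opposite(x, a, b):
    # Basic OBL: mirror x across the midpoint of [a, b].
    return a + b - x


def quasi_opposite(x, a, b):
    # Quasi-OBL: uniform random point between the interval
    # midpoint c and the opposite point.
    c = (a + b) / 2.0
    xo = a + b - x
    return random.uniform(min(c, xo), max(c, xo))


def quasi_reflected(x, a, b):
    # Quasi-reflection OBL: uniform random point between x
    # and the interval midpoint c.
    c = (a + b) / 2.0
    return random.uniform(min(c, x), max(c, x))


def obl_initialize(n, a, b, fitness):
    # Opposition-based initialization: sample n points, add
    # their opposites, and keep the n fittest of the 2n points
    # (here fitness is minimized).
    pop = [random.uniform(a, b) for _ in range(n)]
    pop += [opposite(x, a, b) for x in pop]
    return sorted(pop, key=fitness)[:n]
```

For example, with `x = 2` on `[0, 10]`, the basic opposite is `8`, a quasi-opposite falls in `[5, 8]`, and a quasi-reflected point falls in `[2, 5]`; the same constructions extend coordinate-wise to multi-dimensional solutions.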
