Abstract
Feature selection (FS) is a crucial pre-processing step applied across many domains to improve the efficiency of learning algorithms. In medical data mining, numerous features are used to diagnose illnesses; however, many of these features are redundant or only weakly relevant, which leads to several issues that degrade diagnostic prediction accuracy. Work on FS has therefore expanded significantly across several domains, driven by the growing need for techniques that reduce the dimensionality of data by selecting an optimal subset of features according to predetermined criteria, so that prediction accuracy is maximized and irrelevant characteristics are minimized. To find a near-optimal solution within a limited amount of time, metaheuristics have recently been preferred over traditional optimization methods for FS problems, as they produce suitable solutions in a time-effective manner. Metaheuristics are general-purpose optimization methods that can be applied to nearly any optimization problem, which is particularly helpful for complex ones. Numerous well-known applications have demonstrated the effectiveness of metaheuristics by comparing their success on familiar problems against that of other algorithms. The literature contains a considerable number of metaheuristic methods, including Ant Colony Optimization and Particle Swarm Optimization, both of which build on swarm-intelligence principles. This study employs a metaheuristic algorithm known as the Heap Based Optimizer (HBO) in various tests to solve FS problems with a higher degree of precision. To improve the feature-selection process, the Histogram of Oriented Gradients is integrated with a K-Nearest Neighbor (KNN) classifier using the wrapper approach.
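The wrapper approach described above can be sketched in a few lines: a candidate feature subset is encoded as a binary mask and scored by the cross-validated accuracy of a KNN classifier trained only on the selected features, with a small penalty on subset size. This is a minimal illustration, not the paper's implementation; the dataset, the accuracy/size weighting `alpha`, and the use of scikit-learn are all assumptions for the sake of a runnable example.

```python
# Hypothetical wrapper-style fitness function for feature selection.
# A metaheuristic such as HBO would generate candidate masks and keep
# those with the highest fitness. The alpha weighting is an assumed
# value, not taken from the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99):
    """Higher is better: reward KNN accuracy, penalize subset size."""
    if not mask.any():                       # an empty subset is invalid
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()
    kept = mask.sum() / mask.size            # fraction of features kept
    return alpha * acc + (1 - alpha) * (1 - kept)

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
mask = rng.random(X.shape[1]) < 0.5          # one random candidate subset
print(round(fitness(mask, X, y), 4))
```

In a full run, the optimizer iterates over many such masks, balancing classification accuracy against the number of selected features exactly as the fitness function weights them.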
The proposed approach’s effectiveness is then assessed against seven established techniques from previous related studies on nine high-dimensional datasets, each containing a small number of samples and a large number of classes. Compared with the other methods, the results show that HBO achieves good accuracy on two datasets while reducing the number of features required for classification tasks. It also surpassed the competing techniques in speed of convergence. As such, the suggested HBO approach can optimize the FS process with respect to both the selection size and the classification accuracy.