Abstract
Brain-computer interfaces (BCIs) have garnered significant interest due to their potential to enable communication and control for individuals with limited or no ability to interact with technology in a conventional way. By interpreting the electrical signals generated by brain cells, BCIs eliminate the need for physical interaction with external devices. This study compares the performance of traditional classifiers, specifically linear discriminant analysis (LDA) and support vector machines (SVMs), with a hybrid neural network model for EEG-based gesture classification. The dataset comprised EEG recordings of seven distinct gestures performed by 33 participants. Binary classification tasks were conducted using both raw windowed EEG signals and features extracted via bandpower analysis and the empirical wavelet transform (EWT). The hybrid neural network architecture achieved higher classification accuracy than the standard classifiers. These findings suggest that combining feature extraction with deep learning models offers a promising approach for improving EEG-based gesture recognition in BCI systems.