Abstract
Automated detection of movement intention is crucial for brain-computer interface (BCI) applications, as it can help patients with motor impairments regain mobility. This study introduces a novel approach for automatically identifying movement intention from finger tapping. We compiled a database of EEG signals recorded during left finger taps, right finger taps, and a resting condition. After the requisite pre-processing, the recorded signals are fed into the proposed model, which combines graph theory with deep convolutional networks. The architecture consists of six deep convolutional graph layers designed to capture and extract the essential features of EEG signals. The proposed model achieves 98% accuracy in the binary task of distinguishing left from right finger tapping, and 92% accuracy in the more challenging three-class task that adds the resting condition. These results highlight the effectiveness of the architecture in decoding motor-related brain activity from EEG data. Moreover, compared with recent studies, the proposed model shows strong robustness to noise, making it suitable for online BCI applications.
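For concreteness, the six-layer graph convolutional stack described above could look roughly like the following. This is an illustrative sketch rather than the authors' implementation: it assumes PyTorch Geometric, treats EEG electrodes as graph nodes with their preprocessed channel signals as node features, and the layer width, pooling choice, and class count are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): a six-layer graph convolutional
# classifier for EEG trials, assuming PyTorch Geometric. Electrodes are
# graph nodes; each node's feature vector is that channel's preprocessed
# time series. Hidden width and pooling are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class EEGGraphNet(torch.nn.Module):
    def __init__(self, in_features, hidden=64, num_classes=3):
        super().__init__()
        # Six stacked graph convolutional layers, as in the abstract.
        self.convs = torch.nn.ModuleList(
            [GCNConv(in_features, hidden)]
            + [GCNConv(hidden, hidden) for _ in range(5)]
        )
        self.classifier = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        # x: [num_nodes, in_features] electrode features
        # edge_index: [2, num_edges] electrode connectivity graph
        # batch: node-to-trial assignment for mini-batched graphs
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        x = global_mean_pool(x, batch)  # one vector per EEG trial
        return self.classifier(x)       # logits: left / right / rest
```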