Multi-branch GAT-GRU-transformer for explainable EEG-based finger motor imagery classification


Abstract

Electroencephalography (EEG) provides a non-invasive, real-time approach to decoding motor imagery (MI) tasks such as finger movements, offering significant potential for brain-computer interface (BCI) applications. However, due to the complex, noisy, and non-stationary nature of EEG signals, traditional classification methods, such as Common Spatial Pattern (CSP) and Power Spectral Density (PSD), struggle to extract meaningful, multidimensional features. While deep learning models like CNNs and RNNs have shown promise, they often focus on a single feature dimension and lack interpretability, limiting their neuroscientific relevance. This study proposes a novel multi-branch deep learning framework, termed the Multi-Branch GAT-GRU-Transformer, to enhance EEG-based MI classification. The model consists of parallel branches that extract spatial, temporal, and frequency features: a Graph Attention Network (GAT) models spatial relationships among EEG channels, a hybrid Gated Recurrent Unit (GRU) and Transformer module captures temporal dependencies, and one-dimensional CNNs extract frequency-specific information. Feature fusion is employed to integrate these heterogeneous representations. To improve interpretability, the model incorporates SHAP (SHapley Additive exPlanations) and Phase Locking Value (PLV) analyses. Notably, PLV is also used to construct the GAT adjacency matrix, embedding biologically informed spatial priors into the learning process. The proposed model was evaluated on the Kaya dataset, achieving a five-class MI classification accuracy of 55.76%. Ablation studies confirmed the effectiveness of each architectural component. Furthermore, SHAP and PLV analyses identified C3 and C4 as critical EEG channels and highlighted the Beta frequency band as highly relevant, aligning with known motor-related neural activity.
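The PLV-based adjacency construction mentioned above can be illustrated with a short sketch. PLV measures the consistency of the instantaneous phase difference between two channels over time; the function name `plv_matrix` and the use of the Hilbert transform on band-pass-filtered signals are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import hilbert


def plv_matrix(eeg):
    """Phase Locking Value between every pair of EEG channels.

    eeg: array of shape (n_channels, n_samples), assumed band-pass filtered
    (e.g. to the Beta band). Returns a symmetric (n_channels, n_channels)
    matrix with values in [0, 1], usable as a GAT adjacency matrix.
    """
    # Instantaneous phase of each channel via the analytic signal.
    phases = np.angle(hilbert(eeg, axis=1))
    n = eeg.shape[0]
    plv = np.ones((n, n))  # self-connections set to 1
    for i in range(n):
        for j in range(i + 1, n):
            diff = phases[i] - phases[j]  # phase difference over time
            # Mean resultant length of the phase difference: 1 = locked, 0 = random.
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * diff)))
    return plv
```

Two channels with a constant phase offset yield a PLV near 1, while a channel of independent noise yields a value near 0, so thresholding this matrix gives a biologically grounded graph of channel connectivity.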
The Multi-Branch GAT-GRU-Transformer effectively addresses key challenges in EEG-based MI classification by integrating domain-relevant spatial, temporal, and frequency features, while enhancing model interpretability through biologically grounded mechanisms. This work not only improves classification performance but also provides a transparent framework for neuroscientific investigation, with broad implications for BCI development and cognitive neuroscience research.
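The three-branch fusion described in the abstract can be sketched in PyTorch as follows. This is a minimal illustration under stated assumptions: the single-head GAT layer, hidden size, channel/sample counts, and class head are illustrative choices, not the paper's reported hyperparameters, and `adj` stands in for a thresholded PLV matrix.

```python
import torch
import torch.nn as nn


class GATLayer(nn.Module):
    """Simplified single-head graph attention over EEG channels."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (batch, channels, in_dim); adj: (channels, channels), e.g. PLV-based.
        h = self.W(x)
        n = h.size(1)
        hi = h.unsqueeze(2).expand(-1, -1, n, -1)
        hj = h.unsqueeze(1).expand(-1, n, -1, -1)
        e = torch.relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))  # attend only along graph edges
        alpha = torch.softmax(e, dim=-1)
        return torch.relu(alpha @ h)


class MultiBranchModel(nn.Module):
    """Spatial (GAT) + temporal (GRU->Transformer) + frequency (1D CNN) branches."""

    def __init__(self, n_channels=21, n_samples=200, n_classes=5, d=32):
        super().__init__()
        self.gat = GATLayer(n_samples, d)                       # spatial branch
        self.gru = nn.GRU(n_channels, d, batch_first=True)      # temporal branch
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)
        self.cnn = nn.Sequential(                               # frequency branch
            nn.Conv1d(n_channels, d, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(3 * d, n_classes)                 # fused classifier

    def forward(self, x, adj):
        # x: (batch, channels, samples)
        spatial = self.gat(x, adj).mean(dim=1)                  # pool over channels
        t, _ = self.gru(x.transpose(1, 2))                      # time-major for GRU
        temporal = self.transformer(t).mean(dim=1)              # pool over time
        freq = self.cnn(x).squeeze(-1)
        return self.head(torch.cat([spatial, temporal, freq], dim=-1))
```

Concatenating the pooled branch outputs before a single linear head is the simplest fusion strategy; the abstract's "feature fusion" could equally use attention-weighted or gated combination of the three representations.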
