AMANet: a data-augmented multi-scale temporal attention convolutional network for motor imagery classification


Abstract

Motor imagery brain-computer interface (MI-BCI) has garnered considerable attention due to its potential for promoting neural plasticity. However, the limited number of MI-EEG samples per subject and the susceptibility of features to noise and artifacts pose significant challenges to achieving high decoding performance. To address these problems, a Data-Augmented Multi-Scale Temporal Attention Convolutional Network (AMANet) is proposed. The network consists of four main modules. First, a data augmentation module comprises three steps: sliding-window segmentation to increase the sample size, Common Spatial Pattern (CSP) extraction of discriminative spatial features, and linear scaling to enhance network robustness. Second, multi-scale temporal convolution is incorporated to dynamically extract temporal and spatial features. Third, an Efficient Channel Attention (ECA) mechanism is integrated to adaptively adjust the weights of different channels. Finally, depthwise separable convolution is used to fully integrate the deeply extracted temporal and spatial features for classification. In 10-fold cross-validation, AMANet achieves classification accuracies of 84.06% and 85.09% on BCI Competition IV Datasets 2a and 2b, respectively, significantly outperforming baseline models such as Incep-EEGNet. On the High-Gamma dataset, AMANet attains a classification accuracy of 95.48%. These results demonstrate the strong performance of AMANet in motor imagery decoding tasks.
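As a rough illustration of the augmentation steps named in the abstract, the sketch below implements sliding-window segmentation and linear amplitude scaling on a single EEG trial with NumPy. The window length, stride, and scaling factors are illustrative assumptions, not the paper's actual settings, and the CSP step (typically a learned spatial filter) is omitted for brevity.

```python
import numpy as np

def sliding_window_segments(trial, win_len, stride):
    """Split one EEG trial (channels x samples) into overlapping windows.

    Sliding-window segmentation is the first augmentation step; each
    window becomes an additional training sample.
    """
    n_ch, n_samp = trial.shape
    starts = range(0, n_samp - win_len + 1, stride)
    return np.stack([trial[:, s:s + win_len] for s in starts])

def linear_scale(segments, factors=(0.9, 1.0, 1.1)):
    """Linear scaling: replicate each segment at several amplitude scales.

    A common robustness augmentation; the exact factors here are
    assumptions for illustration.
    """
    return np.concatenate([segments * f for f in factors], axis=0)

# Example: one 22-channel trial of 1000 samples (a Dataset-2a-like shape)
rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))
segs = sliding_window_segments(trial, win_len=500, stride=250)  # 3 windows
aug = linear_scale(segs)  # 3 windows x 3 scales = 9 augmented samples
```

In this toy setup, one trial yields nine augmented samples; in practice the window/stride choice trades off sample count against the temporal context each window retains.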
