Motor Imagery EEG Classification Based on Multi-Domain Feature Rotation and Stacking Ensemble


Abstract

BACKGROUND: Decoding motor intentions from electroencephalogram (EEG) signals is a critical component of motor imagery-based brain-computer interfaces (MI-BCIs). In traditional EEG signal classification, effectively utilizing the valuable information contained in the EEG is crucial. OBJECTIVES: To further optimize the use of information from multiple domains, we propose a novel framework for classifying MI tasks based on multi-domain feature rotation transformation and stacking ensemble. METHODS: First, we extract time-domain, frequency-domain, time-frequency-domain, and spatial-domain features from the EEG signals and perform feature selection within each domain to identify significant features with strong discriminative capacity. Next, local rotation transformations are applied to the significant feature set to generate a rotated feature set, enhancing the representational capacity of the features. The rotated features are then fused with the original significant features of each domain to obtain composite features for that domain. Finally, we employ a stacking ensemble: the prediction results of the base classifiers trained on the different domain features, together with the set of significant features, undergo linear discriminant analysis for dimensionality reduction, yielding a discriminative feature integration that serves as input to the meta-classifier. RESULTS: The proposed method achieves average classification accuracies of 92.92%, 89.13%, and 86.26% on BCI Competition III Dataset IVa, BCI Competition IV Dataset I, and BCI Competition IV Dataset 2a, respectively. CONCLUSIONS: Experimental results show that the proposed method outperforms several existing MI classification methods, such as Common Time-Frequency-Spatial Patterns and Selective Extract of the Multi-View Time-Frequency Decomposed Spatial, in classification accuracy and robustness.
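The pipeline described in METHODS can be sketched in scikit-learn. This is a minimal illustration, not the authors' implementation: the synthetic arrays stand in for real EEG-derived domain features, `SelectKBest` stands in for the paper's feature-selection step, a PCA rotation (in the spirit of Rotation Forest) stands in for the local rotation transformation, and SVM classifiers are an assumed choice for the base and meta learners.

```python
# Hedged sketch of the multi-domain rotation + stacking pipeline.
# All feature matrices are synthetic stand-ins for EEG domain features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)  # binary MI labels (e.g., left vs. right hand)
# Stand-ins for time-, frequency-, time-frequency-, and spatial-domain features
domains = {d: rng.standard_normal((n, 40)) + y[:, None] * 0.5
           for d in ["time", "freq", "tf", "spatial"]}

train_idx, test_idx = train_test_split(np.arange(n), random_state=0)

base_train, base_test = [], []
for name, X in domains.items():
    Xtr, Xte, ytr = X[train_idx], X[test_idx], y[train_idx]
    # 1) Select significant features within this domain
    sel = SelectKBest(f_classif, k=20).fit(Xtr, ytr)
    Str, Ste = sel.transform(Xtr), sel.transform(Xte)
    # 2) Rotation transformation of the significant feature set (PCA rotation)
    rot = PCA().fit(Str)
    # 3) Fuse rotated features with the original significant features
    Ftr = np.hstack([Str, rot.transform(Str)])
    Fte = np.hstack([Ste, rot.transform(Ste)])
    # 4) One base classifier per domain; keep its probability outputs
    clf = SVC(probability=True, random_state=0).fit(Ftr, ytr)
    base_train.append(clf.predict_proba(Ftr)[:, 1:])
    base_test.append(clf.predict_proba(Fte)[:, 1:])

# 5) LDA reduces the stacked base predictions; a meta-classifier decides
Ptr, Pte = np.hstack(base_train), np.hstack(base_test)
lda = LinearDiscriminantAnalysis().fit(Ptr, y[train_idx])
meta = SVC(random_state=0).fit(lda.transform(Ptr), y[train_idx])
acc = meta.score(lda.transform(Pte), y[test_idx])
```

In practice the domain features would come from the EEG itself (e.g., band-power, wavelet, and CSP features), and the abstract additionally feeds the significant-feature set alongside the base-classifier outputs into the LDA step, which this sketch omits for brevity.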
