EED-CL: Extended EEG Deformer with Contrastive Learning for Robust Emotion Recognition


Abstract

Emotion recognition based on EEG signals remains a challenging task due to the complex spatiotemporal properties of brain activity and substantial intersubject variability. To address these challenges, we propose the EED-CL framework, which integrates an extended EEG-Deformer (EED) with contrastive learning (CL). The proposed model incorporates a depthwise separable convolution encoder for efficient extraction of spatiotemporal EEG features, a hierarchical coarse-to-medium-to-fine transformer (HCMFT) to capture multiscale temporal patterns, and an attentive dense information purification (ADIP) module to suppress noise and refine essential latent representations. In addition, CL-based pretraining facilitates robust feature learning even in settings with limited labeled data. The extracted multiscale features are integrated and classified through a transformer encoder and an MLP. Experiments conducted on multiple benchmark EEG datasets show that EED consistently outperforms conventional models, while EED-CL achieves further improvements under label-constrained conditions. Notably, EED-CL demonstrates strong robustness to intersubject variability and noise, enabling stable emotion classification even when labeled samples are scarce. These findings indicate that EED-CL effectively captures multiscale spatiotemporal EEG patterns and offers a scalable and reliable approach for EEG-based emotion recognition.
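The abstract does not specify the contrastive objective used in the CL pretraining stage. For readers unfamiliar with contrastive pretraining, the sketch below shows one common choice, a SimCLR-style NT-Xent loss over two augmented views of the same EEG windows, in plain NumPy. The function name, temperature value, and batch layout are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss (illustrative assumption;
    the paper does not state which contrastive objective EED-CL uses).

    z1, z2: (N, D) embeddings; row i of z1 and row i of z2 form a
    positive pair (two views of the same EEG window). All other rows
    act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalise rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive of sample i is sample i+N, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimising this loss pulls the two views of each window together while pushing apart embeddings of different windows, which is the mechanism the abstract credits for robust feature learning when labels are scarce.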
