JDA-RSDB: a multimodal domain adaptation method for cross-session emotion recognition from EEG and eye movement signals


Abstract

Multimodal emotion recognition has attracted growing interest in affective computing, as combining Electroencephalogram (EEG) and eye movement (EM) signals enables the capture of complex emotional processes. However, EEG and EM signals exhibit joint distribution shifts across days and recording sessions, which degrades recognition performance. Domain adaptation methods have been developed to address such distribution shifts. Unfortunately, existing domain adaptation solutions still yield suboptimal classification results, since ambiguous, non-discriminative decision boundaries are learned during distribution matching. This paper presents Joint Distribution Alignment with Refined and Separable Decision Boundaries (JDA-RSDB), a multimodal domain adaptation method for cross-session emotion recognition from EEG and EM signals. The proposed method is built on the premise that a more discriminative feature representation must be preserved on new sessions during joint distribution matching. To this end, JDA-RSDB produces similar marginal and conditional distributions between domains, first aligning feature statistics at the modality and domain levels, and then encouraging consistent similarity between fused samples from different domains that receive the same class prediction. Simultaneously, this similarity is enhanced by learning a separable feature space on the target data, placing decision boundaries in low-density regions. More importantly, decision boundaries are refined by enforcing agreement between target predictions from a principal classifier and those from an auxiliary classifier. Experiments were conducted on three public datasets, SEED-GER, SEED-IV, and SEED-V, in a cross-session setting. The proposed framework achieves average accuracies of 83.33%, 80.89%, and 75.17% across the available sessions of SEED-GER, SEED-IV, and SEED-V, respectively, outperforming state-of-the-art solutions.
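Two of the ingredients named in the abstract can be illustrated with a very loose sketch: (1) aligning feature statistics between a source session and a target session, shown here with simple first/second moment matching, and (2) retaining target pseudo-labels only where a principal and an auxiliary classifier agree. Both the function names and the moment-matching choice are illustrative assumptions for exposition, not the paper's exact formulation of JDA-RSDB.

```python
import numpy as np

def align_feature_statistics(source, target):
    """Shift and scale target features so their per-dimension mean and
    std match the source session (first/second moment alignment only;
    a stand-in for the paper's statistics-alignment step)."""
    src_mu, src_sd = source.mean(0), source.std(0) + 1e-8
    tgt_mu, tgt_sd = target.mean(0), target.std(0) + 1e-8
    return (target - tgt_mu) / tgt_sd * src_sd + src_mu

def agreement_pseudo_labels(logits_main, logits_aux):
    """Keep a target pseudo-label only where the principal and the
    auxiliary classifier predict the same class (a common
    boundary-refinement heuristic; details here are hypothetical)."""
    preds_main = logits_main.argmax(1)
    preds_aux = logits_aux.argmax(1)
    mask = preds_main == preds_aux
    return preds_main, mask

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (200, 8))   # e.g. session-1 fused EEG+EM features
tgt = rng.normal(3.0, 2.0, (200, 8))   # session-2 features, shifted distribution
aligned = align_feature_statistics(src, tgt)

# Toy classifier outputs on two target samples, three emotion classes.
logits_a = np.array([[2.0, 0.0, 1.0], [0.0, 3.0, 1.0]])
logits_b = np.array([[1.5, 0.2, 0.0], [2.0, 1.0, 0.0]])
labels, mask = agreement_pseudo_labels(logits_a, logits_b)
# labels → [0, 1]; mask → [True, False]: only the first sample is kept.
```

After alignment, the target batch has (to numerical precision) the same per-dimension mean and std as the source batch; the agreement mask then restricts any target-side supervision to samples on which both classifiers concur.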
