An Enhanced Valence-Arousal Multimodal Emotion Dataset for Emotion Recognition


Abstract

We introduce a novel multimodal emotion recognition dataset designed to enhance the precision of valence-arousal modeling while incorporating individual differences. This dataset includes electroencephalogram (EEG), electrocardiogram (ECG), and pulse interval (PI) data from 64 participants. Data collection employed two emotion induction paradigms: video stimuli targeting different valence levels (positive, neutral, and negative) and the Mannheim Multicomponent Stress Test (MMST) inducing high arousal through cognitive, emotional, and social stressors. To enrich the dataset, participants' personality traits, anxiety, depression, and emotional states were assessed using validated questionnaires. By capturing a broad spectrum of affective responses and systematically accounting for individual differences, this dataset provides a robust resource for precise emotion modeling. The integration of multimodal physiological data with psychological assessments lays a strong foundation for personalized emotion recognition. We anticipate this resource will support the development of more accurate, adaptive, and individualized emotion recognition systems across diverse applications.
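To make the dataset's structure concrete, the sketch below shows one plausible way to represent a single trial that bundles the three physiological modalities (EEG, ECG, pulse interval) with the induction-paradigm labels described above. All field names, channel counts, and sampling rates here are illustrative assumptions, not specifications from the dataset itself.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrialRecord:
    """One emotion-induction trial (hypothetical schema, not the official format)."""
    participant_id: int      # 1..64 per the abstract
    paradigm: str            # "video" or "MMST"
    valence_level: str       # "positive" | "neutral" | "negative" (video trials)
    eeg: np.ndarray          # shape (channels, samples); layout assumed
    ecg: np.ndarray          # shape (samples,); single-lead assumed
    pi: np.ndarray           # pulse intervals in seconds

# Build one synthetic trial; 32 channels / 250 Hz / 60 s are placeholder values.
rng = np.random.default_rng(0)
trial = TrialRecord(
    participant_id=1,
    paradigm="video",
    valence_level="negative",
    eeg=rng.standard_normal((32, 250 * 60)),
    ecg=rng.standard_normal(250 * 60),
    pi=np.full(60, 0.8),     # constant 0.8 s intervals, i.e. 75 bpm
)

# A simple derived feature: mean heart rate from pulse intervals, 60 / mean(PI).
mean_hr = 60.0 / trial.pi.mean()
```

A record-per-trial layout like this makes it straightforward to join the physiological signals with the per-participant questionnaire scores (personality, anxiety, depression) when building individualized models.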
