Abstract
Electroencephalography (EEG)-based emotion recognition seeks to enable multidimensional inference of valence, arousal, and dominance (V–A–D) from non-invasive brain signals. However, most existing methods either process each dimension in isolation or adopt single-task pipelines, which underutilize cross-dimensional information and reduce both generalization and physiological interpretability. To overcome these limitations, we propose a multi-task framework with emotion-dimension coupling constraints (MLT-EDCC) that explicitly encodes inter-dimensional priors during end-to-end training. A shared encoder and three task-specific branches are jointly optimized under three complementary constraints: a V–A circular geometric constraint to enforce circumplex structure, an A–D energy alignment constraint to regulate intensity associations, and a V–D correlation constraint to preserve statistical dependencies. This design shifts learning from independent feature extraction to cross-dimensional structure modeling, thereby promoting coherence across valence, arousal, and dominance and enhancing interpretability. Experiments on two benchmark datasets confirm the effectiveness of MLT-EDCC: accuracies reach 97.68%, 97.74%, and 97.41% on DEAP, and 96.16%, 95.78%, and 95.96% on DREAMER, for valence, arousal, and dominance, respectively. These results demonstrate that embedding psychological and neurophysiological priors as optimizable constraints offers a principled pathway toward robust, generalizable, and interpretable multidimensional EEG emotion recognition.
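The three coupling constraints described above would naturally enter training as additive regularizers on top of the per-dimension task losses. A schematic form of such an objective, purely illustrative (the weighting coefficients $\lambda_i$ and loss symbols are assumptions, not notation from the paper), might be:

\[
\mathcal{L} \;=\; \mathcal{L}_{V} + \mathcal{L}_{A} + \mathcal{L}_{D}
\;+\; \lambda_{1}\,\mathcal{L}_{\mathrm{V\text{-}A}}
\;+\; \lambda_{2}\,\mathcal{L}_{\mathrm{A\text{-}D}}
\;+\; \lambda_{3}\,\mathcal{L}_{\mathrm{V\text{-}D}},
\]

where $\mathcal{L}_{V}$, $\mathcal{L}_{A}$, and $\mathcal{L}_{D}$ are the classification losses of the three task-specific branches, and $\mathcal{L}_{\mathrm{V\text{-}A}}$, $\mathcal{L}_{\mathrm{A\text{-}D}}$, and $\mathcal{L}_{\mathrm{V\text{-}D}}$ penalize violations of the circular geometric, energy alignment, and correlation priors, respectively.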