Abstract
As emotion recognition technology is adopted in a growing range of applications, studies based on EEG signals have attracted increasing attention because these signals directly reflect brain activity. Although existing graph neural network (GNN) methods have made progress in processing EEG signals, they still face significant limitations in capturing complex spatiotemporal dependencies, avoiding over-smoothing, and handling cross-regional brain signal interactions, which limits the accuracy and robustness of emotion recognition. To address these problems, this paper proposes a Hierarchical Multi-Scale Graph Neural Network (HMSGNN). The method strengthens the spatiotemporal feature modeling of EEG signals by extracting features at multiple levels, from local to global, thereby improving the accuracy and robustness of emotion recognition. Experimental results show that HMSGNN achieves recognition accuracies of 98.67% and 85.72% in subject-dependent experiments on the SEED and SEED-IV datasets, respectively, and 87.11% and 76.14% in subject-independent experiments. Under the reproduced experimental settings, these results are the highest among the compared methods, while maintaining comparable or lower variance.