Abstract
Electroencephalography (EEG) can objectively reflect an individual's emotional state. However, because of substantial inter-subject variability, existing methods generalize poorly when recognizing emotions across individuals. We therefore propose an EEG emotion classification framework based on deep feature aggregation and multi-source domain adaptation. First, we design a deep feature aggregation module that introduces a novel approach for extracting EEG hemispheric asymmetry features and fuses them with the frequency and spatiotemporal characteristics of the EEG signals. We further propose a multi-source domain adaptation strategy in which an independent feature extraction sub-network processes each source domain, extracting discriminative features and thereby alleviating feature shift between domains. Each source domain is then aligned with the target domain to reduce inter-domain distribution discrepancies and enable effective cross-domain knowledge transfer. To strengthen learning on target samples near the decision boundary, pseudo-labels are generated dynamically for the unlabeled target-domain samples: leveraging the predictions of multiple classifiers, we compute the average confidence of each pseudo-label group and select the group with the highest confidence as the final label for the target sample. Finally, the mean of the outputs of the multiple classifiers serves as the model's prediction. Comprehensive experiments on the publicly available SEED and SEED-IV datasets show that the proposed method outperforms competing methods.
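The abstract does not specify the exact grouping rule for pseudo-labels, so the following is only a minimal sketch of one plausible reading: for each unlabeled target sample, classifiers voting for the same class form a pseudo-label group, the group's softmax confidences are averaged, and the highest-confidence group supplies the label; the final prediction averages the classifiers' outputs. The function names (`select_pseudo_labels`, `ensemble_predict`) are hypothetical, not from the paper.

```python
# Sketch of the pseudo-label selection and ensembling steps described above.
# Assumption: "pseudo-label group" = the set of classifiers that predict the
# same class for a given sample; its confidence is their mean softmax maximum.
import numpy as np


def select_pseudo_labels(probs: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Pick one pseudo-label per target sample from multiple classifiers.

    probs: shape (n_classifiers, n_samples, n_classes), each classifier's
    softmax output on the unlabeled target samples.
    Returns (labels, confidences), each of shape (n_samples,).
    """
    n_clf, n_samples, _ = probs.shape
    votes = probs.argmax(axis=2)            # (n_clf, n_samples) predicted class
    conf = probs.max(axis=2)                # confidence of each classifier's vote
    labels = np.empty(n_samples, dtype=np.int64)
    label_conf = np.empty(n_samples)
    for i in range(n_samples):
        best_label, best_conf = -1, -1.0
        for c in np.unique(votes[:, i]):    # each candidate pseudo-label group
            group = conf[votes[:, i] == c, i]   # confidences voting for class c
            if group.mean() > best_conf:        # keep highest average confidence
                best_label, best_conf = int(c), float(group.mean())
        labels[i], label_conf[i] = best_label, best_conf
    return labels, label_conf


def ensemble_predict(probs: np.ndarray) -> np.ndarray:
    """Final prediction: mean of the classifiers' outputs, then argmax."""
    return probs.mean(axis=0).argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(3, 5, 4))     # 3 classifiers, 5 samples, 4 classes
    probs = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)
    print(select_pseudo_labels(probs))
    print(ensemble_predict(probs))
```

In this reading, per-sample confidences could also gate which pseudo-labels enter training (e.g., via a threshold), which is one common way such dynamically generated labels are used; the paper's exact criterion may differ.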