Abstract
Background/Objectives: Electroencephalography (EEG)-based emotion recognition plays an important role in affective computing and brain-computer interface applications, yet existing methods struggle to achieve high classification accuracy while maintaining physiological interpretability. Methods: In this study, we propose a convolutional neural network (CNN) with a simple architecture for EEG-based emotion classification. To improve model interpretability and support practical applications, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to identify the EEG electrode regions that contribute most to the classification results. Results: On the DEAP dataset, the model achieves classification accuracies of 95.21% for low/high arousal, 94.59% for low/high valence, and 93.01% for the quaternary classification task. The Grad-CAM visualizations reveal that electrodes located over the right prefrontal cortex and the left parietal lobe are the most influential, consistent with findings from emotional lateralization theory. Conclusions: These findings provide a physiological basis for optimizing electrode placement in wearable EEG-based emotion recognition systems. The proposed method combines high classification performance with interpretability and offers guidance for the design of efficient, portable affective computing systems.
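The Grad-CAM step mentioned above can be illustrated with a minimal sketch. Grad-CAM weights each feature map of the last convolutional layer by the spatial mean of the class-score gradient with respect to that map, sums the weighted maps, and applies a ReLU to obtain a relevance heat map. The code below is a generic, framework-free illustration of that formula, not the authors' implementation: the feature-map dimensions, the 4x4 "electrode grid", and the random activations/gradients are all hypothetical stand-ins for values a real CNN backward pass would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: K feature maps over a 4x4 spatial grid
# (in the paper's setting, spatial positions would map to EEG electrodes).
K, H, W = 8, 4, 4
activations = rng.random((K, H, W))          # A^k: last conv-layer outputs
gradients = rng.standard_normal((K, H, W))   # dy_c/dA^k from backpropagation

# Channel weights: global average pooling of the gradients per feature map
alphas = gradients.mean(axis=(1, 2))         # shape (K,)

# Weighted combination of feature maps, followed by ReLU
cam = np.maximum(np.tensordot(alphas, activations, axes=1), 0.0)  # (H, W)

# Normalise to [0, 1] so the map can be rendered as an electrode heat map
if cam.max() > 0:
    cam = cam / cam.max()

print(cam.shape)
```

In a real pipeline the `activations` and `gradients` arrays would come from forward/backward hooks on the trained CNN, and the resulting map would be projected back onto the electrode layout to highlight regions such as the right prefrontal and left parietal sites reported in the paper.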