Abstract
EEG signals are widely used in emotion recognition, a field that currently suffers from the difficulty of obtaining highly distinguishable features. We propose CNN-BiLSTM-CS for EEG-based emotion recognition, addressing the shortcomings of the traditional unidirectional LSTM and Softmax-supervised models in feature extraction. The method first attaches a BiLSTM to the CNN so that emotional feature information can be captured in both directions, and then combines Center loss with Softmax loss to form a joint loss function that minimizes intra-class distance and maximizes inter-class distance, improving recognition ability. The DEAP and SEED datasets are used to evaluate the performance of CNN-BiLSTM-CS. On DEAP, the average accuracies for valence and arousal are 94.22% and 92.16%, an improvement of almost 6% over CNN-LSTM. On SEED, the three-class classification accuracy is 95.45%. CNN-BiLSTM-CS significantly improves the recognition of deep EEG features through the improved network structure and combined loss function.
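The abstract does not give the formula for the joint loss, but the combination it describes (Softmax loss plus Center loss for intra-class compactness) is standard. The following is a minimal NumPy sketch under that assumption; the weighting coefficient `lam` and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Softmax (cross-entropy) loss, computed in a numerically stable way.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # Center loss: mean squared distance between each deep feature
    # and the learned center of its emotion class.
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

def joint_loss(logits, features, labels, centers, lam=0.5):
    # Joint supervision: Softmax loss separates classes (inter-class
    # distance), Center loss pulls features toward their class center
    # (intra-class distance); lam balances the two terms.
    return softmax_cross_entropy(logits, labels) + lam * center_loss(features, labels, centers)
```

In training, `features` would be the output of the CNN-BiLSTM backbone and `centers` would be updated alongside the network weights; when every feature coincides with its class center, the Center term vanishes and only the Softmax term remains.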