Abstract
Emotion recognition involves identifying a person's current mental state or psychological condition during interaction with others. Among the various modalities, Electroencephalography (EEG) is the most reliable emotion recognition technique because of its ability to characterize brain activity accurately. Several emotion recognition methods based on Deep Learning have been designed for EEG signals. Yet, their inability to capture complex features, their susceptibility to overfitting, and their increased computational complexity have limited their widespread application. Therefore, this research proposes the Cross-Connected Distributive Learning-enabled Graph Convolutional Network (C2DGCN) for effective emotion recognition. Specifically, the cross-connected distributive learning in the C2DGCN enables extensive feature sharing and integration, thus reducing computational complexity and improving accuracy. Further, the application of the Statistical Time-Frequency Signal descriptor aids the extraction of complex features and mitigates the overfitting issue. Experimental validation demonstrated the effectiveness of the C2DGCN, which achieved a high accuracy of 97.73%, sensitivity of 98.32%, specificity of 98.22%, and precision of 98.32% with 90% of the data used for training on the SEED-IV dataset. In the evaluation on the DEAP dataset, the proposed C2DGCN model reaches an accuracy of 97.66%, precision of 97.98%, sensitivity of 97.25%, and specificity of 98.07%.