Abstract
Emotion recognition is a challenging problem largely because emotions are expressed across multiple modalities, including speech, facial expressions, and text. In light of this, we introduce the Audio, Visual, and Text Emotions Fusion Network, a novel framework that integrates these heterogeneous inputs efficiently to improve on existing approaches to emotion analysis. Each modality is processed by a specialized sub-network: a Graph Attention Network-based Transformer Network employs Graph Attention Networks to capture dependencies among facial regions; a Hybrid Wav2Vec 2.0 and Convolutional Neural Network combines Wav2Vec 2.0 with a Convolutional Neural Network to extract informative temporal and frequency-domain audio features; and Bidirectional Encoder Representations from Transformers with a Bidirectional Gated Recurrent Unit captures contextual and sequential text semantics. The modality representations are fused by a novel attention-based mechanism that assigns weights according to the emotional context and strengthens cross-modal interactions. Experimental results show that the Audio, Visual, and Text Emotions Fusion Network identifies emotions effectively, achieving an overall accuracy of 98.7%, precision of 98.2%, recall of 97.2%, and an F1-score of 97.49%, demonstrating that the proposed approach is robust and efficient for real-time emotion recognition.