A multi-model deep learning approach for human emotion recognition


Abstract

Emotion recognition is a difficult problem mainly because emotions are expressed through multiple modalities, including speech, face, and text. In light of this, we introduce a novel framework, the Audio, Visual, and Text Emotions Fusion Network, which integrates these dissimilar input types efficiently to improve on existing approaches to emotion analysis. Each modality is processed by a specialized technique: a Graph Attention Network-based Transformer Network employs Graph Attention Networks to detect dependencies among facial regions; a hybrid of Wav2Vec 2.0 and a Convolutional Neural Network extracts informative temporal and frequency-domain audio features; and Bidirectional Encoder Representations from Transformers combined with a Bidirectional Gated Recurrent Unit captures contextual and sequential text semantics. The modality features are fused by a novel attention-based mechanism that distributes weights according to the emotional context and improves cross-modal interactions. The Audio, Visual, and Text Emotions Fusion Network identifies emotions effectively, achieving an overall accuracy of 98.7%, precision of 98.2%, recall of 97.2%, and an F1-score of 97.49%, which makes the proposed approach strong and efficient for real-time emotion recognition.
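The attention-based fusion described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the scoring form `v^T tanh(W f)`, the parameter shapes, and the random stand-in embeddings are all assumptions made for demonstration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(features, W, v):
    """Fuse per-modality feature vectors with attention weights.

    features : list of (d,) arrays, one per modality (audio, visual, text)
    W        : (h, d) projection matrix (hypothetical learned parameter)
    v        : (h,)  scoring vector    (hypothetical learned parameter)
    Returns the weighted-sum fused vector and the attention weights.
    """
    # Score each modality with v^T tanh(W f), then normalize via softmax
    # so the weights reflect each modality's relevance in context.
    scores = np.array([v @ np.tanh(W @ f) for f in features])
    alphas = softmax(scores)
    fused = sum(a * f for a, f in zip(alphas, features))
    return fused, alphas

# Toy demo with random stand-in embeddings for the three modalities
rng = np.random.default_rng(0)
d, h = 8, 4
audio, visual, text = (rng.standard_normal(d) for _ in range(3))
W, v = rng.standard_normal((h, d)), rng.standard_normal(h)
fused, alphas = attention_fusion([audio, visual, text], W, v)
```

In a full system the `features` would come from the three modality encoders (the facial, audio, and text sub-networks), and `W` and `v` would be trained end-to-end with the rest of the model.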
