An interactive information based DCNN-BiLSTM model with dual attention mechanism for facial expression recognition


Abstract

Humans' facial expressions and emotions have a direct impact on their actions and decision-making abilities. Basic CNN models are computationally complex, and speeding up their operation while minimizing this complexity is difficult. In this paper, we propose a Deep Convolutional Neural Network combined with a Bidirectional Long Short-Term Memory (BiLSTM) network, followed by single and cross-fusion attention mechanisms that gather both spatial and channel information from the feature maps. A piecewise cubic polynomial and linear activation function is used to speed up Interactive Learning Information (ILI). Global Average Pooling (GAP) computes weights for the feature maps, and a softmax classifier assigns each input image to one of 7 classes according to the expression it contains. The proposed model's performance was compared with benchmark methods such as NGO-BiLSTM, ICNN-BiLSTM, and HCNN-LSTM. It achieved higher accuracy than the other methods, with 82.89%, 96.78%, 95.78%, and 95.87% on the FER2013, CK+, RAF-DB, and JAFFE datasets respectively, and a lower False Recognition Rate (FAR) of 7.23%, 1.42%, 1.96%, and 1.78% on the same four datasets. It also outperformed the benchmark models with a high Genuine Recognition Rate (GAR) of 88.57% on FER2013, 97.23% on CK+, 96.87% on RAF-DB, and 96.32% on JAFFE.
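The GAP-and-softmax stage of the pipeline can be sketched in plain NumPy. This is an illustrative reconstruction, not the authors' implementation: the function names, the random linear classifier, and the feature-map shapes are all assumptions; only the idea (GAP produces one weight per channel, the weights rescale the maps, and a softmax classifier outputs probabilities over 7 expression classes) comes from the abstract.

```python
import numpy as np

def gap_channel_attention(feature_maps):
    """Hypothetical GAP-based channel attention: global average pooling
    yields one scalar weight per channel, normalized with softmax and
    used to rescale the corresponding feature maps."""
    # feature_maps: (channels, height, width)
    weights = feature_maps.mean(axis=(1, 2))       # GAP: one scalar per channel
    weights = np.exp(weights - weights.max())
    weights = weights / weights.sum()              # softmax over channels
    return feature_maps * weights[:, None, None]   # channel-reweighted maps

def classify(feature_maps, num_classes=7, seed=0):
    """Flatten the attended features and map them to 7 expression classes
    with an illustrative (randomly initialized) linear layer + softmax."""
    attended = gap_channel_attention(feature_maps)
    x = attended.reshape(-1)
    rng = np.random.default_rng(seed)              # stand-in for trained weights
    logits = rng.standard_normal((num_classes, x.size)) @ x
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                     # probabilities over 7 classes

# Example: 8 feature maps of size 4x4 from a hypothetical DCNN-BiLSTM backbone
fmaps = np.random.default_rng(1).standard_normal((8, 4, 4))
probs = classify(fmaps)
print(probs.shape, round(float(probs.sum()), 6))
```

In a trained model the linear layer's weights would be learned end-to-end; here a fixed random matrix merely keeps the sketch runnable.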
