Attention guided feature fusion using marine predator algorithm for facial emotion recognition


Abstract

Facial expressions are essential components of human relationships that help us understand the intentions of others. Automated facial emotion recognition (FER) applies to various fields, including human-computer interaction (HCI), healthcare, modern augmented reality, human-robot interaction (HRI), and smart living. Nevertheless, FER remains challenging due to pose variation, non-uniform illumination, facial accessories, and other factors. Conventional emotion-detection models fall short in jointly optimizing feature extraction and classification, so artificial intelligence (AI) models are increasingly applied to FER. With machine learning (ML) and deep learning (DL) methods, FER enables the automated detection and classification of emotions such as happiness, anger, sadness, and surprise. This paper proposes an Attention-Guided Fusion of Feature Extraction for Emotion Detection through Facial Expressions Using the Marine Predator Algorithm (AFFE-EDFEMPA) approach, which aims to provide an effective FER system. Bilateral filtering (BF) is first employed in the image pre-processing stage to improve image quality by removing unwanted noise. Next, a fusion of AlexNet and SqueezeNet performs feature extraction, detecting and selecting relevant features from the input data. A bidirectional long short-term memory network with temporal attention (BiLSTM-TA) then classifies the facial emotions. Finally, the marine predator algorithm (MPA) optimally tunes the hyperparameters of the BiLSTM-TA model, yielding excellent classification performance. Simulations on the FER-2013 and CK+ datasets demonstrate the promise of the AFFE-EDFEMPA technique: experimental validation showed superior accuracy values of 99.22% and 99.13%, outperforming existing models.
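The pre-processing stage of the pipeline is bilateral filtering, which smooths noise while preserving edges by weighting each neighbouring pixel with both a spatial Gaussian and an intensity (range) Gaussian. The following is a minimal NumPy sketch of that idea only; it is a naive illustrative implementation, not the authors' code, and the `radius`, `sigma_s`, and `sigma_r` values are arbitrary assumptions:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter for a 2-D grayscale image.

    Each output pixel is a weighted average of its (2*radius+1)^2
    neighbourhood, where the weight is the product of a spatial Gaussian
    (distance to the centre pixel) and a range Gaussian (intensity
    difference from the centre pixel). The range term keeps pixels on the
    other side of a strong edge from contributing, which is why edges
    survive the smoothing.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)

    # Spatial Gaussian over the neighbourhood offsets (fixed for all pixels).
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    # Replicate border pixels so every neighbourhood is fully defined.
    pad = np.pad(img, radius, mode="edge")

    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range Gaussian: penalise intensity differences from the centre.
            rng = np.exp(-((patch - img[i, j])**2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

In practice an optimized library routine (e.g. OpenCV's `cv2.bilateralFilter`) would replace this O(H·W·r²) double loop; the sketch only makes the spatial/range weighting explicit.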
