Abstract
Facial expressions are essential components of human relationships that help us understand the intentions of others. Automated emotion recognition from facial expressions, i.e., facial emotion recognition (FER), applies to various fields, including human-computer interaction (HCI), healthcare, modern augmented reality, human-robot interaction (HRI), and smart living. Nevertheless, FER remains challenging due to pose variation, non-uniform illumination, facial accessories, and other factors, and conventional emotion detection models fall short in jointly optimizing feature extraction and classification. As a result, artificial intelligence (AI) models are increasingly applied to automate FER. With machine learning (ML) and deep learning (DL) methods, FER enables the automated detection and classification of various emotions, including happiness, anger, sadness, and surprise. This paper proposes an Attention-Guided Fusion of Feature Extraction for Emotion Detection through Facial Expressions Using the Marine Predator Algorithm (AFFE-EDFEMPA) approach, which aims to provide an effective FER system. Bilateral filtering (BF) is first employed in the image pre-processing stage to improve image quality by removing unwanted noise. A fusion of the AlexNet and SqueezeNet models then performs feature extraction, capturing relevant features from the input images. For facial emotion classification, the AFFE-EDFEMPA model employs a bidirectional long short-term memory network with temporal attention (BiLSTM-TA). Finally, the marine predator algorithm (MPA) optimally tunes the hyperparameters of the BiLSTM-TA model, yielding excellent classification performance. Simulations on the FER-2013 and CK+ datasets demonstrate the promising performance of the AFFE-EDFEMPA technique, which achieves superior accuracy values of 99.22% and 99.13%, respectively, outperforming existing models.