Abstract
Remote diagnosis increasingly incorporates emotion recognition, enabling clinicians to assess patients’ emotional states during teleconsultations through analysis of vocal and acoustic characteristics. This study proposes a refined deep learning framework for emotion recognition from speech signals, designed to enhance the reliability of remote medical assessments. Several deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) models, were evaluated on three publicly available emotional speech datasets: RAVDESS, TESS, and SAVEE. The primary contribution of this work is the Stacked Convolutional Network (SCoNN), a deep neural architecture developed to hierarchically extract and integrate complex audio features for improved emotion classification. The model comprises multiple Conv1D blocks, each incorporating batch normalization, dropout, and activation layers, followed by a dense softmax output layer for final classification. SCoNN achieved an accuracy of 99.93% on the TESS dataset using combined MFCC and Mel spectrogram features; 91.51%, 90.63%, and 93.30% on the RAVDESS dataset for Mel spectrogram, MFCC, and combined features, respectively; and 91.43%, 94.76%, and 95.00% on the SAVEE dataset for the same feature configurations. The novelty of SCoNN lies in its hierarchical stacking mechanism and adaptive multi-feature fusion, which capture emotional variations in speech more effectively than conventional deep CNNs. The proposed framework demonstrates high efficiency and reliability for emotion recognition in remote healthcare applications.
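To make the described architecture concrete, the following Keras sketch stacks Conv1D blocks with batch normalization, activation, and dropout, ending in a dense softmax layer, as the abstract outlines. The number of blocks, filter counts, kernel sizes, dropout rate, input length, and class count are illustrative assumptions, not values reported in the paper.

```python
# A minimal sketch of a stacked Conv1D emotion classifier in the spirit of
# SCoNN. All hyperparameters below are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Conv1D block with batch normalization, activation, and dropout,
    # mirroring the block structure named in the abstract.
    x = layers.Conv1D(filters, kernel_size=5, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Dropout(0.3)(x)
    return x

def build_sconn(input_shape=(162, 1), num_classes=8):
    # input_shape and num_classes are placeholders: the feature length
    # depends on the MFCC / Mel-spectrogram configuration, and RAVDESS,
    # for example, has 8 emotion classes.
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (64, 128, 256):  # hierarchically stacked blocks
        x = conv_block(x, filters)
        x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_sconn()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In this sketch, the multi-feature fusion mentioned in the abstract would correspond to concatenating MFCC and Mel-spectrogram vectors into a single input sequence before the first block; the paper's exact fusion mechanism may differ.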