Stacked convolutional neural network for emotion recognition using multi-feature speech analysis


Abstract

Remote diagnosis is increasingly incorporating emotion recognition, enabling clinicians to assess patients’ emotional states during teleconsultations through analysis of vocal and acoustic characteristics. This study proposes a refined deep learning framework for emotion recognition from speech signals, designed to enhance the reliability of remote medical assessments. Several deep learning architectures, including convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory (LSTM) models, were evaluated using three publicly available emotional speech datasets: RAVDESS, TESS, and SAVEE. The primary contribution of this work is the Stacked Convolutional Network (SCoNN), a deep neural architecture developed to hierarchically extract and integrate complex audio features for improved emotion classification. The model comprises multiple Conv1D blocks incorporating batch normalization, dropout, and activation layers, followed by a dense softmax output layer for final classification. SCoNN achieved accuracies of 99.93% on the TESS dataset using combined MFCC and Mel Spectrogram features; 91.51%, 90.63%, and 93.30% on the RAVDESS dataset for Mel Spectrogram, MFCC, and combined features, respectively; and 91.43%, 94.76%, and 95.00% on the SAVEE dataset for the same feature configurations. The novelty of SCoNN lies in its hierarchical stacking mechanism and adaptive multi-feature fusion, enabling superior capture of emotional variations in speech compared to conventional deep CNNs. The proposed framework demonstrates high efficiency and reliability for emotion recognition in remote healthcare applications.
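The abstract describes SCoNN as a stack of Conv1D blocks (convolution, batch normalization, activation, dropout) followed by a dense softmax classifier over pooled features. The sketch below illustrates that forward pass in plain NumPy under stated assumptions: the kernel sizes, channel widths, global-average pooling step, and 8-class output are illustrative choices, not the authors' published configuration, and dropout is omitted as it is inactive at inference time.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution. x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)."""
    k, _, c_out = w.shape
    t_out = x.shape[0] - k + 1
    out = np.zeros((t_out, c_out))
    for t in range(t_out):
        # Contract the kernel window over time and input channels.
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return out

def batch_norm(x, eps=1e-5):
    """Normalize each channel over the time axis (inference-style sketch)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sconn_forward(features, conv_weights, w_out):
    """Stacked Conv1D blocks -> global average pooling -> dense softmax."""
    x = features
    for w, b in conv_weights:
        x = relu(batch_norm(conv1d(x, w, b)))  # one Conv1D block per stack level
    pooled = x.mean(axis=0)                    # global average pooling over time
    return softmax(pooled @ w_out)             # emotion class probabilities

# Example: a 40-frame, 13-coefficient MFCC-like input through two stacked
# blocks into 8 emotion classes (random weights, for shape illustration only).
rng = np.random.default_rng(42)
x = rng.normal(size=(40, 13))
conv_weights = [
    (rng.normal(size=(5, 13, 32)) * 0.1, np.zeros(32)),
    (rng.normal(size=(5, 32, 64)) * 0.1, np.zeros(64)),
]
w_out = rng.normal(size=(64, 8)) * 0.1
probs = sconn_forward(x, conv_weights, w_out)
print(probs.shape)  # (8,) — one probability per emotion class, summing to 1
```

In this sketch, "adaptive multi-feature fusion" of MFCC and Mel Spectrogram inputs would correspond to concatenating the two feature matrices along the channel axis before the first block.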
