Smart comprehend gesture-based emotions recognition system for people with hearing disability utilizing spatio-temporal graph convolutional network techniques



Abstract

Sign language (SL) is essential for communication among individuals with hearing or speech impairments. When integrated with SL, facial emotion recognition plays a crucial role in improving expression analysis and assistive technologies, particularly in fields such as patient monitoring, psychoanalysis, and human-computer interaction; these applications directly support the development of intelligent systems for communication and care. Gesture recognition (GR) further facilitates communication between machines and humans by enhancing this type of interaction. Machine learning (ML), a branch of artificial intelligence (AI), focuses on developing data-driven methods. The primary challenge in gesture detection is that a machine cannot instantly interpret human language, yet such interpretation is crucial for enabling communication, particularly for deaf and elderly users who issue commands through gestures. Therefore, this article presents a novel Smart Comprehend Gesture-Based Emotions Recognition System Utilising Spatio-Temporal Graph Convolutional Network (SCGERS-STGCN) approach for individuals with hearing disabilities. The SCGERS-STGCN approach recognizes gestures and emotions to enhance communication for individuals with hearing impairments. Initially, the SCGERS-STGCN model applies Gaussian filtering (GF) in the image pre-processing stage to reduce noise and improve the quality of input images. For feature extraction, the Vision Transformer (ViT) model is utilized to capture complex patterns and relationships within the gestures and facial expressions that indicate emotions. The spatio-temporal graph convolutional network (ST-GCN) approach is then employed for facial emotion detection and classification. Finally, parameter tuning of the ST-GCN model is performed using the developed African vulture optimization algorithm (DAVOA). Experimentation of the SCGERS-STGCN model is performed on the Emotion Detection dataset. The comparison analysis of the SCGERS-STGCN model revealed a superior accuracy of 98.53% compared to existing techniques.
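The abstract's pre-processing stage uses Gaussian filtering to suppress noise in the input frames. A minimal sketch of that step, assuming a synthetic grayscale frame and a placeholder `sigma=1.5` (neither the image size nor the filter width are values from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical illustration of the Gaussian-filtering pre-processing step.
# The 64x64 gradient "frame", noise level 0.1, and sigma=1.5 are assumed
# placeholder values, not the paper's configuration.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # smooth synthetic image
noisy = clean + rng.normal(scale=0.1, size=clean.shape)

smoothed = gaussian_filter(noisy, sigma=1.5)

# Smoothing should pull the noisy frame back toward the clean signal.
err_noisy = np.mean((noisy - clean) ** 2)
err_smooth = np.mean((smoothed - clean) ** 2)
print(err_smooth < err_noisy)  # → True
```

The filter trades a small amount of blur for a large reduction in pixel noise, which is why it is a common first stage before feature extraction.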
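The ST-GCN classifier alternates a graph convolution over the skeleton/landmark graph (spatial step) with a convolution along the frame axis (temporal step). A toy sketch of one such layer, using the common normalized-adjacency formulation; the five-node chain graph, channel counts, and 3-frame averaging window are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Minimal one-layer spatio-temporal graph convolution sketch (assumed toy setup).
V = 5                                      # graph nodes (e.g. facial landmarks)
A = np.zeros((V, V))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:  # toy chain skeleton
    A[i, j] = A[j, i] = 1.0
A += np.eye(V)                             # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt        # symmetrically normalized adjacency

C_in, C_out, T = 3, 8, 16                  # channels and frame count (assumed)
rng = np.random.default_rng(0)
x = rng.standard_normal((C_in, T, V))      # one sample: features x time x nodes
W = rng.standard_normal((C_out, C_in)) * 0.1

# Spatial step: mix each frame's node features through A_hat, then project.
spatial = np.einsum('ctv,vw->ctw', x, A_hat)
spatial = np.einsum('oc,ctv->otv', W, spatial)

# Temporal step: average over a sliding 3-frame window per node (stride 1).
k = 3
out = np.stack([spatial[:, t:t + k, :].mean(axis=1)
                for t in range(T - k + 1)], axis=1)
print(out.shape)  # → (8, 14, 5)
```

Stacking several such layers lets information propagate across both the graph structure and time, which is what makes the ST-GCN family suitable for gesture and expression sequences.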
