Internet of things enabled indoor activity monitoring for visually impaired people with hybrid deep learning and optimized algorithms for enhanced safety


Abstract

Indoor activity monitoring methods support the wellbeing and security of elderly and visually impaired individuals living in their homes. These methods use a range of technologies and sensors to monitor daily actions such as movement, sleep patterns, and medication compliance, providing valuable insight into the user's daily life and overall health. The accuracy and adaptability of deep learning (DL) models make human activity recognition (HAR) a critical tool for improving effectiveness, safety, and personalized insight in indoor spaces. Using DL techniques, HAR transforms indoor monitoring by enabling precise detection and interpretation of human actions. DL techniques automatically extract and learn discriminative features, making them well suited to identifying complex human activities in sensor data. Nevertheless, selecting an appropriate DL architecture and tuning its parameters are critical for strong results. This study presents a novel Indoor Activity Monitoring for Visually Impaired People with Hybrid Deep Learning for Enhanced Safety (IAMVIP-HDLES) methodology for IoT environments. The IAMVIP-HDLES methodology is designed to monitor and recognize the indoor activities of visually impaired people in real time. The approach first applies Z-score normalization as a data pre-processing step to standardize the input data, guaranteeing uniformity and improving system performance. For feature selection, the Osprey-Cauchy-sparrow search algorithm (OCSSA) is employed to identify the most relevant features in the raw data. In addition, a hybrid model comprising a convolutional neural network, bidirectional long short-term memory, and an attention mechanism (CNN-BiLSTM-Attention) is used to monitor indoor activities. Finally, hyperparameter selection based on the improved whale optimization algorithm (IWOA) is performed to enhance the classification results of the CNN-BiLSTM-Attention model. Experiments are conducted on the UCI-HAR dataset. In the comparative analysis, the IAMVIP-HDLES method achieved a superior accuracy of 96.76% over existing techniques.
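Two stages of the pipeline described above are concrete enough to sketch: Z-score normalization of raw sensor channels, and attention pooling over the hidden states a BiLSTM would produce at each time step. The NumPy sketch below is illustrative only, with hypothetical shapes (128 windows of 9 sensor channels, 20 time steps, 16-dimensional states); it is not the authors' implementation, and the simple dot-product scoring stands in for whatever attention variant the paper uses.

```python
import numpy as np

def zscore_normalize(X, eps=1e-8):
    """Standardize each sensor channel to zero mean and unit variance.
    X: (n_samples, n_channels). eps guards against zero-variance channels."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps)

def attention_pool(H, w):
    """Attention pooling over time steps.
    H: (T, d) hidden states (e.g., BiLSTM outputs); w: (d,) scoring vector.
    Returns a (d,) context vector: a softmax-weighted sum of the T states."""
    scores = H @ w                       # one scalar score per time step, shape (T,)
    scores -= scores.max()               # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    return alpha @ H                     # weighted sum of states, shape (d,)

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(128, 9))  # raw sensor windows (hypothetical)
Xn = zscore_normalize(X)                           # per-channel standardized input

H = rng.normal(size=(20, 16))                      # stand-in for BiLSTM outputs
ctx = attention_pool(H, rng.normal(size=16))       # fixed-size vector for the classifier
```

The attention step collapses a variable-length sequence of hidden states into a single fixed-size vector, which is what lets the classifier head operate on whole activity windows regardless of their length.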
