CLEAR: Multimodal Human Activity Recognition via Contrastive Learning Based Feature Extraction Refinement


Abstract

Human activity recognition (HAR) has become a crucial research area for many applications, such as healthcare and surveillance. With the development of artificial intelligence (AI) and the Internet of Things (IoT), sensor-based HAR has gained increasing attention and offers notable advantages over existing approaches. However, relying solely on existing labeled data is often insufficient to ensure that a model generalizes to new data. The CLEAR method is designed to improve the accuracy of multimodal human activity recognition. It combines data augmentation, multimodal feature fusion, and contrastive learning to refine and extract highly discriminative features from multiple data sources, substantially enhancing the model's capacity to identify and classify diverse human activities. CLEAR achieves strong generalization performance on unseen datasets using only the training data, and it can be applied directly to various target domains without retraining or fine-tuning. Specifically, CLEAR consists of two parts. First, it applies data augmentation in both the time and frequency domains to enrich the training data. Second, it refines feature extraction with attention-based multimodal fusion and employs supervised contrastive learning to improve feature discriminability. We achieved accuracy rates of 81.09%, 90.45%, and 82.75% on the three public datasets USC-HAD, DSADS, and PAMAP2, respectively. Additionally, when the training data are reduced from 100% to 20%, the model's accuracy on the three datasets decreases by only about 5%, demonstrating that our model possesses strong generalization capabilities.
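The abstract does not include implementation details, so the sketch below illustrates, under assumed names and a PyTorch setting, what the two ingredients it describes might look like: simple time- and frequency-domain augmentations of sensor windows, and a supervised contrastive loss in which samples sharing an activity label act as positives for each other. All function names and parameters (`jitter`, `freq_mask`, `supervised_contrastive_loss`, the `temperature` value) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def jitter(x, sigma=0.05):
    """Time-domain augmentation (assumed variant): add Gaussian noise per channel.

    x: (batch, channels, time) tensor of sensor windows.
    """
    return x + sigma * torch.randn_like(x)

def freq_mask(x, max_bins=4):
    """Frequency-domain augmentation (assumed variant): zero a random band of FFT bins."""
    spec = torch.fft.rfft(x, dim=-1)
    n_bins = spec.size(-1)
    start = torch.randint(0, max(n_bins - max_bins, 1), (1,)).item()
    spec[..., start:start + max_bins] = 0
    return torch.fft.irfft(spec, n=x.size(-1), dim=-1)

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over a batch of embeddings.

    features: (N, D) embeddings from the (fused) feature extractor.
    labels:   (N,) integer activity labels; same-label samples are positives.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature            # (N, N) similarity logits

    # Exclude self-similarity on the diagonal.
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float('-inf'))

    # Positives: same label, excluding self.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # log p(j | i) over all non-self samples j.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability over positives, for anchors that have at least one positive.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(sum_log_prob_pos[valid] / pos_counts[valid]).mean()
```

In this sketch the augmentations would be applied to each training window before encoding, and the contrastive loss would be computed on the fused embeddings alongside the usual classification loss; how CLEAR actually weights or schedules these terms is not stated in the abstract.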
