Home Robot Interaction Based on EEG Motor Imagery and Visual Perception Fusion



Abstract

Amid accelerating demographic aging, home robots built on intelligent technology show great potential for assisting the daily lives of the elderly. This paper proposes a multimodal human-robot interaction system that fuses EEG signal analysis with visual perception, enabling a home robot to perceive both the intentions of elderly users and their environment. First, a channel selection strategy identifies the most discriminative electrode channels for Motor Imagery (MI) EEG signals; signal representation is then enhanced by combining Filter Bank Common Spatial Patterns (FBCSP), wavelet packet decomposition, and nonlinear features, and one-versus-rest Support Vector Regression (SVR) performs four-class classification. Second, the YOLOv8 model detects objects in indoor scenes; object confidences and spatial distributions are extracted, and scene recognition is performed with a machine learning classifier. Finally, the EEG classification results are combined with the scene recognition results to establish a scene-intention correspondence, enabling recognition of intention-driven task types for elderly users across different home scenes. Evaluation shows that the proposed method attains a recognition accuracy of 83.4%, indicating good classification accuracy and practical value for multimodal perception and human-robot collaborative interaction, and providing technical support for developing smarter, more personalized home assistance robots.
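The one-versus-rest decoding scheme described above can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the FBCSP/wavelet feature extraction is replaced by synthetic feature vectors, and the per-class SVR models are stood in for by simple least-squares regressors, each trained on a +1/-1 target for its class, with the predicted class taken as the regressor with the highest output.

```python
import numpy as np

# Sketch of one-versus-rest regression for 4-class motor-imagery decoding.
# Assumption: real EEG features (FBCSP, wavelet packet, nonlinear features)
# are replaced here by synthetic Gaussian clusters, one per imagery class.

rng = np.random.default_rng(0)
n_classes, n_features, n_per_class = 4, 8, 50

# Synthetic features: each class clusters around its own mean vector.
means = rng.normal(0.0, 3.0, size=(n_classes, n_features))
X = np.vstack([means[c] + rng.normal(0.0, 1.0, (n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Append a bias column, then fit one least-squares regressor per class,
# each regressing a +1 (this class) / -1 (all other classes) target.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
W = np.column_stack([
    np.linalg.lstsq(Xb, np.where(y == c, 1.0, -1.0), rcond=None)[0]
    for c in range(n_classes)
])

# Decode: the predicted class is the regressor with the largest output.
pred = np.argmax(Xb @ W, axis=1)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

In the paper's pipeline, each least-squares regressor would be an SVR model fitted on the combined FBCSP, wavelet packet, and nonlinear feature vector; the argmax-over-outputs decision rule is the same.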
