Semantic-Physical Sensor Fusion for Safe Physical Human-Robot Interaction in Dual-Arm Rehabilitation



Abstract

Safe physical human-robot interaction (pHRI) in rehabilitation requires reliable perception and low-latency decision making under heterogeneous and unreliable sensor inputs. This paper presents a multimodal sensor-fusion-based safety framework that integrates physical state estimation, semantic information fusion, and an edge-deployed large language model (LLM) for real-time pHRI safety control. A dynamics-based virtual sensing method is introduced to estimate internal joint torques from external force-torque measurements, achieving a normalized mean absolute error of 18.5% in real-world experiments. An asynchronous semantic state pool with a time-to-live mechanism is designed to fuse visual, force, posture, and human semantic cues while remaining robust to sensor delays and dropouts. From structured multimodal tokens, an instruction-tuned edge LLM outputs discrete safety decisions that are then mapped to continuous compliant control parameters. The framework is trained on a hybrid dataset of limited real-world samples and LLM-augmented synthetic data, and is evaluated on unseen real and mixed-condition scenarios. Experimental results show reliable detection of safety-critical events with a low emergency misdetection rate, while maintaining an end-to-end decision latency of approximately 223 ms on edge hardware. Real-world experiments on a rehabilitation robot demonstrate effective responses to impacts, user instability, and visual occlusions, indicating the practical applicability of the proposed approach for real-time pHRI safety monitoring.
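The asynchronous semantic state pool with a time-to-live (TTL) mechanism can be sketched as follows. This is a minimal illustration under assumed semantics, not the paper's implementation: each modality (vision, force, posture, human semantic cues) writes its latest state independently with its own TTL, and downstream decision logic reads a snapshot from which expired entries have been dropped, so delayed or dropped sensors never contribute stale cues. All key names and TTL values are illustrative.

```python
import time


class SemanticStatePool:
    """Asynchronous multimodal state pool with per-entry time-to-live.

    Illustrative sketch: modality names and TTL values are assumptions,
    not taken from the paper.
    """

    def __init__(self):
        self._entries = {}  # key -> (value, absolute expiry timestamp)

    def update(self, key, value, ttl_s):
        """Write a modality's latest state; it expires ttl_s seconds from now."""
        self._entries[key] = (value, time.monotonic() + ttl_s)

    def snapshot(self):
        """Return only the entries that have not yet expired."""
        now = time.monotonic()
        return {k: v for k, (v, expiry) in self._entries.items() if expiry > now}


# Usage: fast modalities get short TTLs so stale readings expire quickly.
pool = SemanticStatePool()
pool.update("vision", "user_upright", ttl_s=0.5)
pool.update("force", {"wrench_norm": 12.3}, ttl_s=0.1)
fused_view = pool.snapshot()  # dropped-out sensors simply vanish from the view
```

A snapshot like this could then be serialized into the structured multimodal tokens the edge LLM consumes; how that serialization is done is not specified here.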
