HAVEN: Haptic And Visual Environment Navigation by a Shape-Changing Mobile Robot with Multimodal Perception


Abstract

Many animals exhibit agile mobility in obstructed environments because they can tune their bodies to negotiate and manipulate obstacles and apertures. Most mobile robots, by contrast, are rigid structures that avoid obstacles where possible. In this work, we introduce a new framework, the Haptic And Visual Environment Navigation (HAVEN) architecture, which combines vision and proprioception so that a deformable mobile robot can move more agilely through obstructed environments. The algorithms enable the robot to be autonomously (a) predictive, by analysing visual feedback from the environment and preparing its body accordingly, (b) reactive, by responding to proprioceptive feedback, and (c) active, by manipulating obstacles and gap sizes using its deformable body. The robot was tested while approaching apertures of different sizes in obstructed environments, ranging from wider than its full shape to narrower than its narrowest possible configuration. The experiments involved multiple obstacles with different physical properties. The results show higher navigation success rates and an average 32% reduction in navigation time when the robot actively manipulates obstacles using its shape-changing body.
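The three behaviours in the abstract can be pictured as a simple per-step decision rule. The sketch below is purely illustrative: the paper's actual controller is not reproduced here, and all names, widths, and thresholds are assumptions for the sake of the example.

```python
# Hypothetical sketch of HAVEN's three behaviours (predictive, reactive,
# active). All constants and function names are illustrative assumptions,
# not the paper's implementation.

MIN_WIDTH = 0.30   # narrowest robot configuration in metres (assumed)
MAX_WIDTH = 0.60   # widest robot configuration in metres (assumed)
FORCE_LIMIT = 5.0  # proprioceptive contact threshold in newtons (assumed)

def choose_action(aperture_width, contact_force):
    """Return (mode, target_body_width) for one control step."""
    if contact_force > FORCE_LIMIT:
        # Reactive: unexpected contact sensed, so shrink the body.
        return "reactive", MIN_WIDTH
    if aperture_width >= MAX_WIDTH:
        # Predictive: the gap is wide enough for the full shape.
        return "predictive", MAX_WIDTH
    if aperture_width >= MIN_WIDTH:
        # Predictive: pre-shape the body slightly narrower than the gap.
        return "predictive", max(MIN_WIDTH, aperture_width * 0.9)
    # Active: gap is smaller than the narrowest shape, so adopt the
    # narrowest configuration and push obstacles apart while advancing.
    return "active", MIN_WIDTH
```

For example, a measured 0.2 m gap with no contact force would trigger the "active" mode, whereas a sudden 10 N contact force would override vision and trigger the "reactive" mode.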
