A hyperconformal dual-modal metaskin for well-defined and high-precision contextual interactions


Abstract

Proprioception and touch serve as complementary sensory modalities that coordinate hand kinematics and convey users' intent for precise interactions. However, current motion-tracking electronics remain bulky and insufficiently precise, and accurately decoding both modalities is challenging owing to mechanical crosstalk between endogenous and exogenous deformations. Here, we report a hyperconformal dual-modal (HDM) metaskin for interactive hand-motion interpretation. The metaskin integrates a strongly coupled hydrophilic interface with a two-step transfer strategy to minimize interfacial mechanical losses. The 10-μm-scale hyperconformal film is highly sensitive to intricate skin stretches while minimizing signal distortion. It accurately tracks skin stretches as well as touch locations and translates them into polar signals that are individually salient. This approach enables a differentiable signaling topology within a single data channel without adding structural complexity to the metaskin. When combined with temporal differential calculations and a time-series machine-learning network, the metaskin extracts interactive context and action cues from the low-dimensional data. This capability is further exemplified through demonstrations in contextual navigation, typing and control integration, and multi-scenario object interaction. We demonstrate this fundamental approach in advanced skin-integrated electronics, highlighting its potential for instinctive interaction paradigms and paving the way for augmented somatosensation recognition.
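The abstract describes encoding stretch and touch as opposite-polarity signals within a single data channel, then separating them with temporal differential calculations before feeding a time-series network. The following is a minimal, purely illustrative sketch of that polarity-based separation idea; the function names, thresholds, and the synthetic trace are all hypothetical and do not reflect the actual metaskin signal chain:

```python
import numpy as np

def temporal_differential(signal, dt=0.01):
    """First-order temporal difference of a 1-D trace
    (a stand-in for the paper's 'temporal differential calculations')."""
    return np.diff(signal) / dt

def classify_polarity(signal, threshold=0.5):
    """Label each sample by sign: +1 for stretch-like (positive polarity),
    -1 for touch-like (negative polarity), 0 for baseline."""
    labels = np.zeros(signal.shape, dtype=int)
    labels[signal > threshold] = 1
    labels[signal < -threshold] = -1
    return labels

# Synthetic single-channel trace: one stretch event, then one touch event.
t = np.linspace(0.0, 1.0, 200)
trace = np.zeros_like(t)
trace[40:80] = 1.0     # endogenous stretch -> positive excursion (assumed)
trace[120:160] = -1.0  # exogenous touch -> negative excursion (assumed)

labels = classify_polarity(trace)
rate = temporal_differential(trace)
```

In this toy setting, the sign of each excursion alone disambiguates the two modalities, which is the point of the differentiable signaling topology: one channel carries both cues, and a downstream time-series model only needs low-dimensional input.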
