Indoor Pedestrian Positioning Method Based on Ultra-Wideband with a Graph Convolutional Network and Visual Fusion


Abstract

To address the low accuracy of indoor positioning caused by factors such as signal interference and visual distortion, this paper proposes a method that integrates ultra-wideband (UWB) technology with visual positioning. In the UWB positioning module, the strong feature-extraction ability of a graph convolutional network (GCN) is used to aggregate the features of adjacent positioning points and improve positioning accuracy. In the visual positioning module, residuals learned by a bidirectional gated recurrent unit (Bi-GRU) network are added as compensation to the solution of the mathematical visual positioning model, improving the continuity of the positioning results. Finally, the two sets of coordinates are fused with a particle filter (PF) to obtain the final position and further improve accuracy. Experimental results show that the proposed GCN-based UWB method achieves a standalone positioning error below 0.72 m, a 55% improvement over the Chan-Taylor algorithm. The proposed visual positioning method based on Bi-GRU residual fitting achieves an accuracy of 0.42 m, 71% better than Zhang Zhengyou's visual positioning algorithm. In the fusion experiment, 80% of the position errors are within 0.24 m and the maximum error is 0.66 m; compared with standalone UWB and visual positioning, accuracy improves by 56% and 52%, respectively, effectively enhancing indoor pedestrian positioning accuracy.
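The final fusion step described above combines the UWB and visual coordinate estimates with a particle filter. The paper's exact filter design is not given in the abstract, so the following is only a minimal sketch of one PF fusion step under assumed Gaussian measurement noise; the noise scales (`sigma_uwb`, `sigma_vis`) and the function `pf_fuse` are illustrative, not from the paper.

```python
import numpy as np

def pf_fuse(uwb_xy, vis_xy, sigma_uwb=0.7, sigma_vis=0.4,
            n_particles=500, seed=0):
    """One particle-filter fusion step for two 2-D position estimates.

    Particles are proposed around the UWB estimate, weighted by the
    Gaussian likelihood of both measurements, and the importance-weighted
    mean is returned as the fused position.
    """
    rng = np.random.default_rng(seed)
    # Propose particles around the UWB estimate.
    particles = uwb_xy + rng.normal(0.0, sigma_uwb, size=(n_particles, 2))
    # Squared distances of each particle to both measurements.
    d_uwb = np.sum((particles - uwb_xy) ** 2, axis=1)
    d_vis = np.sum((particles - vis_xy) ** 2, axis=1)
    # Importance weights: product of the two Gaussian likelihoods.
    w = np.exp(-d_uwb / (2 * sigma_uwb ** 2)) * np.exp(-d_vis / (2 * sigma_vis ** 2))
    w /= w.sum()
    # Fused estimate: weighted mean of the particles.
    return particles.T @ w

fused = pf_fuse(np.array([2.0, 3.0]), np.array([2.2, 2.9]))
```

Because the visual noise scale is assumed smaller than the UWB scale here, the fused point is pulled closer to the visual estimate, mirroring how the filter weights the more accurate modality.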
