Enhancing real-time heading estimation for pedestrian navigation via deep learning and smartphone embedded sensors


Abstract

Accurate smartphone-based pedestrian navigation depends critically on precise heading estimation. However, heading estimation remains a challenging problem in most pedestrian navigation applications because of the bias of low-cost smartphone sensors, thermal drift during long-term operation, and unexpected changes in the carrying mode of handheld devices. Under these conditions, many existing methods based on pervasive resources suffer severe errors, while approaches that rely on auxiliary infrastructure can hinder a ubiquitous, seamless indoor-outdoor navigation experience. This research aims to enhance heading estimation by leveraging pervasive measurements such as LVGOs and straight-line features recognized automatically from camera images. The proposed method mitigates accumulated gyroscope drift using the absolute heading angle estimated by LVGOs. These absolute angles, however, are highly prone to false estimation when navigating near areas with strong electric and magnetic activity, owing to persistent geomagnetic anomalies. Encouraged by the pervasiveness of straight-line features in indoor and outdoor environments, we developed a deep learning-based visual tracker for these features to enhance heading estimation from gyroscope and magnetic field fusion. A convolutional neural network based on the U-Net architecture was developed to recognize these features accurately and quickly; the detected lines are then used as a heading constraint to overcome long-term gyroscope drift and short-term compass heading bias. The proposed method achieves a favorable balance between recognition latency and precision, enabling smooth real-time performance. The achieved results improve heading estimation and could be of significant help, especially for visually impaired people, who commonly follow tactile paving. This encourages future tests and assessments with visually impaired users so that the proposed method can be reliably included in their applications.
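The fusion scheme described in the abstract can be sketched as a simple complementary filter: gyroscope yaw rates are integrated for short-term stability, the magnetometer heading applies a weak long-term drift correction, and a recognized straight-line feature whose map bearing is known supplies a stronger absolute correction when visible. This is only a minimal illustrative sketch, not the paper's method; the function names, gains, and the assumption that each tracked line's true map bearing is known are all hypothetical.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle in radians to the interval [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def fuse_heading(heading, gyro_rate, dt, compass_heading=None,
                 line_bearing=None, k_compass=0.02, k_line=0.2):
    """One complementary-filter step (illustrative gains, not from the paper).

    heading         -- current heading estimate (rad)
    gyro_rate       -- yaw rate from the gyroscope (rad/s)
    dt              -- time step (s)
    compass_heading -- absolute heading from the magnetometer, or None
    line_bearing    -- absolute bearing implied by a tracked straight-line
                       feature (e.g. tactile paving) whose map direction
                       is assumed known, or None
    """
    # Short-term: integrate the gyro yaw rate (accumulates drift over time).
    heading = wrap_angle(heading + gyro_rate * dt)

    # Long-term: nudge toward the compass heading with a small gain,
    # since the magnetometer is biased near magnetic anomalies.
    if compass_heading is not None:
        heading = wrap_angle(
            heading + k_compass * wrap_angle(compass_heading - heading))

    # Visual constraint: a recognized straight line gives a stronger
    # absolute correction whenever one is detected in the camera frame.
    if line_bearing is not None:
        heading = wrap_angle(
            heading + k_line * wrap_angle(line_bearing - heading))

    return heading

# Usage: a constant gyro bias alone would accumulate into large drift,
# but the compass and line-feature corrections keep the estimate bounded.
h = 0.0
for _ in range(100):
    h = fuse_heading(h, gyro_rate=0.01, dt=0.1,
                     compass_heading=0.0, line_bearing=0.0)
```

The angle-wrapping on every innovation term is what keeps the corrections well-behaved across the ±180° discontinuity, which matters for pedestrians who turn frequently.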
