Abstract
Driven by advances in artificial intelligence, pedestrian trajectory prediction is shifting from traditional machine learning methods toward autonomous decision-making frameworks based on neural networks. However, the spatiotemporal uncertainty of pedestrian movement limits the accuracy of existing prediction models. To address this issue, we propose a multi-source perception fusion system based on INS-GNSS wearable devices. By integrating a high-precision inertial measurement unit (IMU) with a multi-constellation global navigation satellite system (GNSS) receiver, the system improves both localization and prediction accuracy. For localization, we introduce a Gait-Adaptive UKF (Gait-AUKF) that identifies pedestrian gait patterns and motion states by fusing multi-sensor data; its adaptive mechanism effectively suppresses trajectory drift and improves tracking accuracy. For trajectory prediction, we propose a framework based on a multi-source fusion attention mechanism: a GRU encoder extracts trajectory features from historical motion data, an attention mechanism assigns scale-dependent weights to these features, and an LSTM decoder, constrained spatiotemporally by an A* path-planning algorithm, generates future pedestrian trajectories. Experimental results show that, compared with the UKF and AKF, the Gait-AUKF reduces eastward error by 30%, northward error by 26.27%, and vertical error by 49.08%. The complete prediction framework achieves a 68.54% reduction in average position error (APE) and a 70.42% reduction in direction error (DE) relative to LSTM and Transformer baselines. Ablation experiments confirm that integrating the Gait-AUKF and the A* path-planning algorithm enhances the model's decision performance: with both components, ADE decreases by 68.49% and FDE by 71.86%.