AI-Based Vehicle State Estimation Using Multi-Sensor Perception and Real-World Data


Abstract

With the rise of vehicle automation, accurate estimation of driving dynamics has become crucial for ensuring safe and efficient operation. Vehicle dynamics control systems rely on these estimates to provide necessary control variables for stabilizing vehicles in various scenarios. Traditional model-based methods use wheel-related measurements, such as steering angle or wheel speed, as inputs. However, under low-traction conditions, e.g., on icy surfaces, these measurements often fail to deliver trustworthy information about the vehicle states. In such critical situations, precise estimation is essential for effective system intervention. This work introduces an AI-based approach that leverages perception sensor data, specifically camera images and lidar point clouds. By using relative kinematic relationships, it bypasses the complexities of vehicle and tire dynamics and enables robust estimation across all scenarios. Optical and scene flow are extracted from the sensor data and processed by a recurrent neural network to infer vehicle states. The proposed method is vehicle-agnostic, allowing trained models to be deployed across different platforms without additional calibration. Experimental results based on real-world data demonstrate that the AI-based estimator presented in this work achieves accurate and robust results under various conditions. Particularly in low-friction scenarios, it significantly outperforms conventional model-based approaches.
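The pipeline described above (optical/scene-flow features fed to a recurrent network that outputs vehicle states) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimension, the use of a GRU cell, the hidden size, and the output layout `[vx, vy, yaw_rate]` are all assumptions for the sake of the example, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUStateEstimator:
    """Toy recurrent estimator: per-frame flow features -> vehicle states."""

    def __init__(self, n_in, n_hidden, n_out=3):
        s = 1.0 / np.sqrt(n_hidden)
        # Gate weights act on the concatenation [input, hidden].
        self.Wz = rng.uniform(-s, s, (n_hidden, n_in + n_hidden))  # update gate
        self.Wr = rng.uniform(-s, s, (n_hidden, n_in + n_hidden))  # reset gate
        self.Wh = rng.uniform(-s, s, (n_hidden, n_in + n_hidden))  # candidate
        self.Wo = rng.uniform(-s, s, (n_out, n_hidden))            # readout
        self.n_hidden = n_hidden

    def forward(self, seq):
        """seq: (T, n_in) flow features; returns (T, n_out) state estimates."""
        h = np.zeros(self.n_hidden)
        outputs = []
        for x in seq:
            xh = np.concatenate([x, h])
            z = sigmoid(self.Wz @ xh)                              # update gate
            r = sigmoid(self.Wr @ xh)                              # reset gate
            h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
            h = (1 - z) * h + z * h_tilde                          # GRU update
            outputs.append(self.Wo @ h)  # assumed layout: [vx, vy, yaw_rate]
        return np.stack(outputs)

# Example: 20 frames of 64-dimensional aggregated flow features
# (in practice these would be extracted from camera/lidar data).
flow_features = rng.standard_normal((20, 64))
estimates = GRUStateEstimator(n_in=64, n_hidden=32).forward(flow_features)
print(estimates.shape)  # (20, 3)
```

Because the network consumes relative motion cues from perception sensors rather than wheel-based signals, the same trained model can in principle be applied to different vehicle platforms, which is the vehicle-agnostic property the abstract refers to.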
