A VibV Dataset Integrating Vibration and Vision for Enhanced Safety in Self-Driving Tasks


Abstract

Due to the complexity of real-world traffic scenarios, autonomous driving systems still face safety challenges and uncontrolled threats in blind spots. Current systems rely primarily on cameras, LiDAR, radar, and their fusion to perceive the environment. However, under special road conditions or in extreme weather, these sensors may exhibit defects, resulting in false or missed detections that can lead to safety accidents. This paper proposes the VibV dataset, which introduces vehicle vibration signals into the perception system. By using vibration information as a supervisory signal for the detection system, it enhances perception accuracy and thereby improves safety. The dataset records vibration signals and vision data simultaneously in scenes such as rumble strips and speed bumps. A total of 39 experiments were performed over two months, yielding 39 segments of vibration data and 22,677 original video frames. The vibration signals underwent preliminary processing, and the images were manually annotated and classified. Technical evaluations demonstrate the dataset's usability and reliability. It can be applied to various autonomous driving tasks to enhance safety and robustness.
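The abstract does not describe a concrete processing pipeline, but the idea of using vibration as a supervisory signal can be illustrated with a minimal sketch: flag time windows in a vertical-acceleration trace whose energy stands out from the baseline, as a proxy for events such as speed bumps or rumble strips. All function names, window sizes, and thresholds below are assumptions for illustration, not details from the VibV dataset.

```python
import numpy as np

def detect_vibration_events(accel, fs, window_s=0.5, k=3.0):
    """Flag windows whose RMS vertical acceleration exceeds k times
    the baseline (median) window RMS. Illustrative sketch only; the
    paper's actual preprocessing is not specified in the abstract."""
    win = int(window_s * fs)          # samples per window
    n = len(accel) // win             # number of full windows
    rms = np.sqrt(np.mean(accel[:n * win].reshape(n, win) ** 2, axis=1))
    baseline = np.median(rms)         # robust estimate of normal road noise
    return rms > k * baseline         # boolean mask of "event" windows

# Synthetic example: 10 s of road noise at 100 Hz, with an 8 Hz burst
# between seconds 4 and 5 simulating a speed-bump crossing.
fs = 100
rng = np.random.default_rng(0)
accel = 0.05 * rng.standard_normal(10 * fs)
accel[4 * fs:5 * fs] += np.sin(2 * np.pi * 8 * np.arange(fs) / fs)
events = detect_vibration_events(accel, fs)
print(np.flatnonzero(events))  # indices of windows overlapping the burst
```

Windows flagged this way could then supervise the vision branch, e.g. by labeling the video frames that fall inside an event window; frame-to-window alignment would rely on the synchronized timestamps the dataset records.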
