A dataset for environmental sound recognition in embedded systems for autonomous vehicles


Abstract

Environmental sound recognition could play a crucial role in the development of autonomous vehicles by mimicking human behavior, particularly by complementing sight and touch to create a comprehensive sensory system. Just as humans rely on auditory cues to detect and respond to critical events such as emergency sirens, honking horns, or the approach of other vehicles and pedestrians, autonomous vehicles equipped with advanced sound recognition capabilities may significantly enhance their situational awareness and decision-making. To promote this approach, we extended the UrbanSound8K (US8K) dataset, a benchmark in urban sound classification research, by merging the classes deemed irrelevant for autonomous vehicles into a new class named 'background' and adding a 'silence' class sourced from Freesound.org. The resulting dataset, named UrbanSound8K for Autonomous Vehicles (US8K_AV), contains 4.94 hours of annotated audio in 4,908 WAV files distributed among 6 classes. It supports the development of predictive models that can be deployed on embedded systems such as the Raspberry Pi.
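The class-merging step described above can be sketched as a simple relabeling of UrbanSound8K's 10 original classes. Note that the abstract does not state which classes US8K_AV retains; the grouping below (keeping four traffic-relevant classes and folding the rest into 'background') is an assumption for illustration only.

```python
# Sketch of remapping UrbanSound8K labels to a US8K_AV-style taxonomy.
# Which classes are kept is an ASSUMPTION; the source only says that
# classes irrelevant to autonomous vehicles become 'background' and
# that a 'silence' class is added from Freesound.org recordings.

# UrbanSound8K's original 10 classes.
US8K_CLASSES = [
    "air_conditioner", "car_horn", "children_playing", "dog_bark",
    "drilling", "engine_idling", "gun_shot", "jackhammer",
    "siren", "street_music",
]

# Hypothetical set of classes kept as-is for the AV use case.
AV_RELEVANT = {"car_horn", "siren", "engine_idling", "dog_bark"}

def to_av_label(us8k_label: str) -> str:
    """Map a UrbanSound8K class name to a US8K_AV-style class name."""
    if us8k_label not in US8K_CLASSES:
        raise ValueError(f"unknown UrbanSound8K class: {us8k_label}")
    return us8k_label if us8k_label in AV_RELEVANT else "background"

# Adding the new 'silence' class yields a 6-class taxonomy.
av_classes = {to_av_label(c) for c in US8K_CLASSES} | {"silence"}
```

Under this assumed grouping, the remapped UrbanSound8K labels plus the added 'silence' class give exactly the 6 classes the dataset description mentions.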
