Event-Based Vision at the Edge: A Review


Abstract

Spiking Neural Networks (SNNs) executed on neuromorphic hardware promise energy-efficient, low-latency inference well suited to edge deployment in size-, weight-, and power-constrained environments such as autonomous vehicles, wearable devices, and unmanned aerial platforms. However, a coherent research pathway to deployment of neuromorphic devices remains elusive. This paper presents a structured review and position on the state of SNN-based vision across four interconnected dimensions: network architectures, training methodologies, event-based datasets and simulation techniques, and neuromorphic computing hardware. We survey the evolution from shallow convolutional SNNs to spiking Transformers and hybrid designs that leverage the advantages of both SNNs and conventional artificial neural networks. We also examine surrogate gradient training and ANN-to-SNN conversion approaches, catalogue real-world and simulated event-based datasets, and assess the landscape of neuromorphic platforms, ranging from rigid mixed-signal architectures to fully configurable digital systems. Our analysis reveals that while each area has matured considerably in isolation, critical integration challenges persist. In particular, event-based datasets remain scarce and lack standardisation, training methodologies introduce systematic gaps relative to deployment hardware, and access to neuromorphic platforms is restricted by proprietary toolchains and limited development kit availability. We conclude that bridging these integration gaps, rather than advancing individual components alone, represents the most important and least addressed work required to realise the potential of SNN-based vision at the edge.
