Efficient event-based delay learning in spiking neural networks

Abstract

Spiking Neural Networks compute using sparse communication and are attracting increased attention as a more energy-efficient alternative to traditional Artificial Neural Networks. While standard Artificial Neural Networks are stateless, spiking neurons are stateful and hence intrinsically recurrent, making them well-suited for spatio-temporal tasks. However, the duration of this intrinsic memory is limited by synaptic and membrane time constants. Delays are a powerful additional mechanism and, in this paper, we propose an event-based training method for Spiking Neural Networks with delays, grounded in the EventProp formalism, which enables the calculation of exact gradients with respect to weights and delays. Our method supports multiple spikes per neuron and introduces a delay learning algorithm that can, in contrast to previous methods, also be applied to recurrent Spiking Neural Networks. We evaluate our method on a simple sequence detection task, as well as the Yin-Yang, Spiking Heidelberg Digits, Spiking Speech Commands and Braille letter reading datasets, demonstrating that our algorithm can optimise delays from suboptimal initial conditions and enhance classification accuracy compared to architectures without delays. We also find that recurrent delays are particularly beneficial in small networks. Finally, we show that our approach uses less than half the memory of the current state-of-the-art delay-learning method and is up to 26× faster.
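To make concrete why synaptic delays extend a spiking network's effective memory beyond its time constants, the sketch below simulates the forward pass of a single event-driven leaky integrate-and-fire neuron whose per-synapse delays shift input spike arrival times. This is only an illustrative toy, not the paper's EventProp-based gradient computation: the function name, parameters (`tau`, `threshold`), and the example spike trains are all assumptions chosen for the demo. It shows how suitable delays can make otherwise asynchronous inputs coincide and push the membrane over threshold.

```python
import math

def lif_output_spikes(input_spikes, weights, delays, tau=5.0, threshold=1.0):
    """Event-driven LIF neuron (toy sketch, not the paper's algorithm).

    input_spikes: list of (time, synapse_index) input events
    weights[i], delays[i]: weight and axonal/synaptic delay of synapse i
    Returns the list of output spike times.
    """
    # A spike emitted at time t on synapse i arrives at t + delays[i];
    # process arrivals in temporal order (the event-based view).
    events = sorted((t + delays[i], weights[i]) for t, i in input_spikes)
    v, t_last, out = 0.0, 0.0, []
    for t, w in events:
        v *= math.exp(-(t - t_last) / tau)  # exponential membrane decay
        v += w                              # instantaneous jump on arrival
        t_last = t
        if v >= threshold:                  # threshold crossing -> spike
            out.append(t)
            v = 0.0                         # reset membrane after spiking
    return out

# Two weak inputs, 3 ms apart: without delays the first one has decayed
# too much by the time the second arrives, so the neuron stays silent.
print(lif_output_spikes([(0.0, 0), (3.0, 1)], [0.6, 0.6], [0.0, 0.0]))

# Delaying synapse 0 by 3 ms makes both arrivals coincide at t = 3,
# their sum crosses threshold, and the neuron fires.
print(lif_output_spikes([(0.0, 0), (3.0, 1)], [0.6, 0.6], [3.0, 0.0]))
```

A delay-learning rule such as the one proposed in the paper would adjust `delays` by gradient descent rather than setting them by hand; the gradients are obtained exactly from the timing of these discrete events rather than by discretising the membrane dynamics onto a dense time grid.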
