Abstract
BACKGROUND: Spiking Neural Networks (SNNs) have emerged as a promising paradigm in artificial intelligence due to their energy efficiency. Training SNNs remains a formidable challenge, however, because the non-differentiable spike activation function prevents the direct application of conventional backpropagation. Existing surrogate gradient methods often suffer from critical limitations, including gradient mismatch, gradient explosion or vanishing, and high computational overhead.

METHODS: In this paper, we propose an Adaptive and Lightweight (AdaLi) backpropagation method to address these issues. AdaLi reduces the computational overhead of training by introducing lightweight surrogate gradients and dynamically adjusting the gradient update boundaries. It further adapts the surrogate gradients across training epochs, enhancing network stability. The method also introduces additional hyperparameters to mitigate gradient mismatch; these can be either fine-tuned manually or determined automatically from the distribution of spiking-neuron membrane potentials.

RESULTS: Experiments on both static and neuromorphic datasets demonstrate that SNNs trained with AdaLi outperform their baseline counterparts in both efficiency and accuracy. The stable surrogate gradients in AdaLi effectively mitigate gradient vanishing and explosion.

CONCLUSION: AdaLi offers a novel approach to optimizing gradient computation and parameter updates in SNNs, paving the way for more effective and accurate training. The source code is available at https://github.com/parania/AdaLi.
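As a rough illustration of the epoch-adaptive, lightweight surrogate-gradient idea summarized above, the following PyTorch sketch pairs a Heaviside spike with a triangular (piecewise-linear) surrogate whose support width shrinks over training. The triangular shape, the linear width schedule, and all names here (AdaptiveSpike, width_schedule) are illustrative assumptions for exposition, not the AdaLi formulation itself.

import torch

class AdaptiveSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; triangular surrogate gradient
    in the backward pass. NOTE: an assumed sketch, not the AdaLi method."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold, width):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        ctx.width = width
        # Fire a spike wherever the membrane potential crosses the threshold.
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Triangular surrogate: nonzero only within `width` of the threshold.
        # The bounded support caps the gradient magnitude (limiting explosion)
        # and keeps the backward pass cheap to compute.
        surrogate = torch.clamp(
            1.0 - torch.abs(v - ctx.threshold) / ctx.width, min=0.0
        ) / ctx.width
        # No gradients for the non-tensor threshold/width arguments.
        return grad_output * surrogate, None, None

def width_schedule(epoch, total_epochs, w_start=1.0, w_end=0.2):
    """Assumed linear decay of the surrogate support across epochs,
    narrowing the gradient window as training stabilizes."""
    t = epoch / max(total_epochs - 1, 1)
    return w_start + t * (w_end - w_start)

A layer would then call spikes = AdaptiveSpike.apply(v, 1.0, width_schedule(epoch, num_epochs)) at each timestep, recomputing the width once per epoch; the actual AdaLi surrogate shape, boundary adjustment, and hyperparameter selection from membrane-potential statistics are specified in the paper.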