Abstract
Three-factor learning rules in spiking neural networks (SNNs) have emerged as a crucial extension of traditional Hebbian learning and spike-timing-dependent plasticity (STDP), incorporating neuromodulatory signals to improve adaptation and learning efficiency. These mechanisms enhance biological plausibility and facilitate improved credit assignment in artificial neural systems. This paper surveys the topic from a machine learning perspective, reviewing recent advances in three-factor learning and discussing their theoretical foundations, algorithmic implementations, and relevance to reinforcement learning and neuromorphic computing. In addition, we explore interdisciplinary approaches, scalability challenges, and potential applications in robotics, cognitive modeling, and artificial intelligence (AI) systems. Finally, we highlight key research gaps and propose future directions for bridging neuroscience and AI.