Surrogate Gradient Learning in Spiking Neural Networks

Read::

  • Surrogate Gradient Learning in Spiking Neural Networks, E.O. Neftci, H. Mostafa, F. Zenke (2019) 🛫 Reading Citation:: NA Print:: ❌ Zotero Link:: NA PDF:: NA

Files:: arXiv.org Snapshot; Neftci et al_2019_Surrogate Gradient Learning in Spiking Neural Networks.pdf
Reading Note:: E.O. Neftci, H. Mostafa, F. Zenke (2019)
Web Rip::

TABLE without id
file.link as "Related Files",
title as "Title",
type as "type"
FROM "" AND -"ZZ. planning"
WHERE citekey = "neftciSurrogateGradientLearning2019" 
SORT file.cday DESC
 
> [!Excerpt] Abstract
> Spiking neural networks are nature's versatile solution to fault-tolerant and energy efficient signal processing. To translate these benefits into hardware, a growing number of neuromorphic spiking neural network processors attempt to emulate biological neural networks. These developments have created an imminent need for methods and tools to enable such systems to solve real-world signal processing problems. Like conventional neural networks, spiking neural networks can be trained on real, domain specific data. However, their training requires overcoming a number of challenges linked to their binary and dynamical nature. This article elucidates step-by-step the problems typically encountered when training spiking neural networks, and guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting. To that end, it gives an overview of existing approaches and provides an introduction to surrogate gradient methods, specifically, as a particularly flexible and efficient method to overcome the aforementioned challenges.

Quick Reference
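The training obstacle the abstract alludes to is that the spike nonlinearity is a hard threshold whose derivative is zero almost everywhere, so plain backpropagation yields no learning signal. Surrogate gradient methods keep the binary spike in the forward pass but substitute a smooth pseudo-derivative in the backward pass. Below is a minimal PyTorch sketch of that idea (my own illustration, not code from the paper): the fast-sigmoid surrogate and the `beta` steepness follow the SuperSpike convention, and the toy LIF loop, unit threshold, and tensor sizes are arbitrary assumptions.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with a fast-sigmoid surrogate gradient."""
    beta = 10.0  # surrogate steepness; illustrative value, tune per task

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 1.0).float()  # forward pass: binary spikes at threshold 1

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Replace Theta'(u - 1), which is zero almost everywhere,
        # with the smooth surrogate 1 / (beta * |u - 1| + 1)^2.
        surrogate = 1.0 / (SurrGradSpike.beta * (u - 1.0).abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply

# Toy usage: one leaky integrate-and-fire layer unrolled over T time steps.
T, batch, n_in, n_out = 50, 8, 100, 10
w = (0.1 * torch.randn(n_in, n_out)).requires_grad_()
x = (torch.rand(T, batch, n_in) < 0.05).float()  # random input spike trains
mem = torch.zeros(batch, n_out)
out_spikes = []
for t in range(T):
    mem = 0.9 * mem + x[t] @ w         # leaky integration of input current
    s = spike_fn(mem)                  # spike where membrane crosses threshold
    mem = mem * (1.0 - s.detach())     # reset membrane after a spike
    out_spikes.append(s)

loss = torch.stack(out_spikes).sum(0).mean()  # toy objective: mean spike count
loss.backward()                               # gradients flow via the surrogate
print(w.grad.abs().mean())
```

Detaching the spike in the reset term keeps gradients flowing only through the surrogate path, a common practical choice; `beta` trades off gradient smoothness against how closely the surrogate tracks the true threshold.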

Top Comments

Let's say grey is for overall comments.

Topics

Further Reading

---

Extracted Annotations and Comments

Figures