Learning in Smooth Rate and Non-Smooth Spiking Networks

Date
2024-07-08
Abstract
This thesis investigates the differences between spiking and firing-rate-based neural networks, with a focus on learning. Understanding these differences can improve spiking neural network learning techniques for engineering tasks, such as neuromorphic computing applications, and contribute to the modeling of learning in biological neural systems. The FORCE technique applies an online update to the linear decoder of a reservoir network to train both spiking and non-spiking networks on dynamical systems tasks. We demonstrate that FORCE effectively trains networks of leaky integrate-and-fire (LIF) neurons and their equivalent instantaneous firing-rate neurons to perform various dynamical systems tasks with correlated neural bases. Our findings indicate that while both network types perform correlated neural computations, firing-rate networks achieve more finely tuned solutions that are not always transferable to LIF networks; the solutions learned by the LIF networks, however, were largely transferable to the firing-rate networks. We explore error scaling as a function of network size, revealing that FORCE-trained spiking networks likely use a noisy rate encoding, while firing-rate networks use the neural basis more efficiently. Analyzing the time-averaged cross-trial variance and squared bias in these networks, we find that the spiking networks' output error is dominated by variability in the spiking neural basis, whereas firing-rate networks are limited by the expressiveness of their neural basis and decoder. Our results suggest that avoiding the approximation of a cross-trial-averaged firing rate in LIF networks requires either stabilizing spike times relative to the learning task or using an error function other than the L2 norm. Additionally, we examine the effects of noisy conditions on spike-train decoding error. Precise and reliable spike trains can achieve efficient encoding, but perturbations in spike timing or spike failures disrupt this efficiency: maintaining efficient error scaling requires reducing spike-time jitter in proportion to the inverse of network size and reducing failure rates at least in proportion to the inverse square root of network size.
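For concreteness, the online FORCE update described above is conventionally implemented as a recursive least squares (RLS) step on the readout weights. The following is a minimal sketch of that standard formulation; the state vector r, target f, and all names and default values are illustrative assumptions, not the thesis's own notation.

import numpy as np

class ForceDecoder:
    """Recursive least squares (RLS) update of a linear readout,
    the online step at the core of FORCE training. Names and
    defaults are illustrative, not taken from the thesis."""

    def __init__(self, n_units, alpha=1.0):
        self.phi = np.zeros(n_units)       # decoder weights
        self.P = np.eye(n_units) / alpha   # running inverse correlation of r

    def step(self, r, f_target):
        """One online update given reservoir activity r(t) and target f(t)."""
        z = self.phi @ r                   # current decoded output
        Pr = self.P @ r
        k = Pr / (1.0 + r @ Pr)            # RLS gain vector
        self.phi -= (z - f_target) * k     # reduce the instantaneous error
        self.P -= np.outer(k, Pr)          # rank-1 update of P
        return z

In a full FORCE loop the output z is typically fed back into the reservoir, so the decoder shapes the very dynamics it reads out; for a spiking network, r would be a vector of filtered spike trains rather than smooth rates.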
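The LIF model and its instantaneous firing-rate counterpart can be summarized as follows. This is one standard correspondence (subthreshold dynamics plus the steady-state f-I curve), offered as an assumption since the abstract does not spell out the thesis's exact rate equivalence:

\tau_m \frac{dv_i}{dt} = -v_i + I_i(t), \qquad v_i \ge v_{\mathrm{th}} \;\Rightarrow\; \text{spike},\; v_i \leftarrow v_{\mathrm{reset}},

and, taking $v_{\mathrm{reset}} = 0$, the equivalent instantaneous rate

r_i(t) = \begin{cases} \left[\tau_{\mathrm{ref}} + \tau_m \ln\!\frac{I_i(t)}{I_i(t) - v_{\mathrm{th}}}\right]^{-1}, & I_i(t) > v_{\mathrm{th}}, \\ 0, & \text{otherwise.} \end{cases}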
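The bias-variance analysis mentioned above presumably decomposes the time-averaged, cross-trial mean squared output error in the standard way; with $\hat{x}(t)$ the decoded output on a single trial, $x(t)$ the target, and expectations taken across trials (notation assumed, not the thesis's own):

\frac{1}{T}\int_0^T \mathbb{E}\!\left[\big(\hat{x}(t)-x(t)\big)^2\right] dt \;=\; \underbrace{\frac{1}{T}\int_0^T \big(\mathbb{E}[\hat{x}(t)]-x(t)\big)^2\, dt}_{\text{time-averaged bias}^2} \;+\; \underbrace{\frac{1}{T}\int_0^T \operatorname{Var}\!\big[\hat{x}(t)\big]\, dt}_{\text{time-averaged cross-trial variance}}.

In these terms, the closing scaling conditions read: for a network of $N$ neurons, spike-time jitter must satisfy $\sigma_{\mathrm{jitter}}(N) = O(1/N)$ and the spike-failure probability $p_{\mathrm{fail}}(N) = O(N^{-1/2})$ for efficient error scaling to survive (symbol names assumed).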
Keywords
Spiking Neural Networks, Computational Neuroscience, Firing-Rate Networks
Citation
Newton, T. R. (2024). Learning in smooth rate and non-smooth spiking networks (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.