Paper Title
Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
Paper Authors
Paper Abstract
The emergence of brain-inspired neuromorphic computing as a paradigm for edge AI is motivating the search for high-performance, efficient spiking neural networks to run on this hardware. However, compared to classical neural networks in deep learning, current spiking neural networks lack competitive performance in compelling areas. Here, for sequential and streaming tasks, we demonstrate how a novel type of adaptive spiking recurrent neural network (SRNN) achieves state-of-the-art performance compared to other spiking neural networks, and approaches or exceeds the performance of classical recurrent neural networks (RNNs) while exhibiting sparse activity. From this, we calculate a $>$100x energy improvement for our SRNNs over classical RNNs on the harder tasks. To achieve this, we model standard and adaptive multiple-timescale spiking neurons as self-recurrent neural units, and leverage surrogate gradients and auto-differentiation in the PyTorch deep learning framework to efficiently implement backpropagation-through-time, including learning the key spiking neuron parameters so that the neurons adapt to the tasks.
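The last sentence of the abstract is concrete enough to sketch in code. Below is a minimal, hedged PyTorch illustration of the described recipe: an adaptive spiking neuron written as a self-recurrent unit, a Heaviside spike whose backward pass uses a surrogate gradient so backpropagation-through-time can flow through it, and per-neuron membrane and adaptation time constants registered as learnable parameters ("multiple timescales"). All names (`SpikeFn`, `AdaptiveLIFCell`, `tau_m`, `tau_adp`, `beta`) and the boxcar surrogate derivative are illustrative assumptions, not code or hyperparameters from the paper.

```python
# A minimal sketch (not the authors' released code) of the abstract's idea:
# an adaptive leaky-integrate-and-fire neuron as a self-recurrent PyTorch unit,
# trained with backpropagation-through-time via a surrogate gradient.
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, boxcar surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thr):
        ctx.save_for_backward(v_minus_thr)
        return (v_minus_thr > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thr,) = ctx.saved_tensors
        # Pass gradients only near the threshold (the surrogate gradient).
        surrogate = (v_minus_thr.abs() < 0.5).float()
        return grad_output * surrogate


class AdaptiveLIFCell(nn.Module):
    """One time step of an adaptive spiking neuron with learnable timescales."""

    def __init__(self, n_in, n_hidden, beta=1.8):
        super().__init__()
        self.fc_in = nn.Linear(n_in, n_hidden)
        self.fc_rec = nn.Linear(n_hidden, n_hidden, bias=False)  # recurrent weights
        # Per-neuron membrane and adaptation time constants, learned by BPTT
        # alongside the synaptic weights; initial ranges are assumptions.
        self.tau_m = nn.Parameter(torch.empty(n_hidden).uniform_(5.0, 25.0))
        self.tau_adp = nn.Parameter(torch.empty(n_hidden).uniform_(30.0, 150.0))
        self.beta = beta  # strength of the threshold adaptation

    def forward(self, x, v, thr_a, spk):
        alpha = torch.exp(-1.0 / self.tau_m)   # membrane decay per step
        rho = torch.exp(-1.0 / self.tau_adp)   # adaptation decay per step
        # Threshold rises after each spike and slowly decays back (adaptation).
        thr_a = rho * thr_a + (1 - rho) * spk
        thr = 1.0 + self.beta * thr_a
        # Leaky integration of feed-forward and self-recurrent input,
        # with a soft reset that subtracts the threshold after a spike.
        v = alpha * v + self.fc_in(x) + self.fc_rec(spk) - thr * spk
        spk = SpikeFn.apply(v - thr)
        return v, thr_a, spk


if __name__ == "__main__":
    # Toy usage: run a 100-step random input stream through one recurrent layer.
    cell = AdaptiveLIFCell(n_in=40, n_hidden=128)
    v = torch.zeros(1, 128)
    thr_a = torch.zeros(1, 128)
    spk = torch.zeros(1, 128)
    for x_t in torch.randn(100, 1, 40):
        v, thr_a, spk = cell(x_t, v, thr_a, spk)
```

Because `SpikeFn` substitutes a differentiable surrogate for the non-differentiable spike only in the backward pass, the whole unrolled loop can be trained end to end with PyTorch's auto-differentiation, and the gradients reach `tau_m` and `tau_adp` just as they reach the weights; the sparse-activity and energy figures quoted in the abstract are properties of the trained networks, not of this sketch.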