Title

Coarse scale representation of spiking neural networks: backpropagation through spikes and application to neuromorphic hardware

Authors

Yanguas-Gil, Angel

Abstract

In this work we explore recurrent representations of leaky integrate-and-fire neurons operating at a timescale equal to their absolute refractory period. Our coarse time-scale approximation is obtained using a probability distribution function for spike arrivals that is homogeneously distributed over this time interval. This leads to a discrete representation that exhibits the same dynamics as the continuous model, enabling efficient large-scale simulations and backpropagation through the recurrent implementation. We use this approach to explore the training of deep spiking neural networks, including convolutional, all-to-all connectivity, and maxpool layers, directly in PyTorch. We find that the recurrent model achieves high classification accuracy using spike trains only 4 steps long during training. We also observe good transfer back to continuous implementations of leaky integrate-and-fire neurons. Finally, we apply this approach to some standard control problems as a first step toward exploring reinforcement learning on neuromorphic chips.
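As a rough illustration of the kind of model the abstract describes, the sketch below implements a discrete-time leaky integrate-and-fire neuron as a recurrent PyTorch cell, with a surrogate gradient so that backpropagation can flow through the spike nonlinearity. This is a minimal sketch, not the paper's actual code: the decay factor, threshold, the rectangular surrogate, and its window width are illustrative assumptions.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike nonlinearity with a surrogate gradient.
    The rectangular surrogate below is an illustrative assumption."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Let gradients pass only near threshold (window of 0.5 is arbitrary).
        return grad_output * (v.abs() < 0.5).float()

class DiscreteLIF(torch.nn.Module):
    """Leaky integrate-and-fire neuron on a time grid equal to the absolute
    refractory period, unrolled as a recurrent cell. The decay and threshold
    values are placeholders, not parameters taken from the paper."""

    def __init__(self, decay: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.decay = decay
        self.threshold = threshold

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (time_steps, batch, features), e.g. 4 time steps
        v = torch.zeros_like(inputs[0])
        spikes = []
        for x in inputs:
            v = self.decay * v + x                  # leaky integration
            s = SpikeFn.apply(v - self.threshold)   # spike if above threshold
            v = v * (1.0 - s)                       # reset: one-step refractory period
            spikes.append(s)
        return torch.stack(spikes)

# Example: a spike train 4 steps long, as in the abstract.
lif = DiscreteLIF()
out = lif(torch.randn(4, 32, 100))
```

Unrolling such a cell for a handful of steps (e.g. the 4-step spike trains mentioned in the abstract) lets standard PyTorch autograd train deep spiking networks end to end.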
