Title

Connection Pruning for Deep Spiking Neural Networks with On-Chip Learning

Authors

Thao N. N. Nguyen, Bharadwaj Veeravalli, Xuanyao Fong

Abstract

Long training time hinders the potential of deep, large-scale Spiking Neural Networks (SNNs) with on-chip learning capability from being realized on embedded systems hardware. Our work proposes a novel connection pruning approach that can be applied during on-chip Spike Timing Dependent Plasticity (STDP)-based learning to optimize the learning time and the network connectivity of a deep SNN. We applied our approach to a deep SNN with Time To First Spike (TTFS) coding and successfully achieved a 2.1x speed-up and 64% energy savings in on-chip learning, while reducing the network connectivity by 92.83% without incurring any accuracy loss. Moreover, the connectivity reduction results in a 2.83x speed-up and 78.24% energy savings in inference. Evaluation of our proposed approach on a Field Programmable Gate Array (FPGA) platform revealed that a 0.56% power overhead was needed to implement the pruning algorithm.
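The abstract names the technique (pruning connections during STDP-based learning) but not its pruning criterion. The Python sketch below is therefore only an illustration of the general idea under stated assumptions: the weight-threshold rule, the parameter values (`a_plus`, `a_minus`, `tau`, `threshold`), and the helper names (`stdp_dw`, `prune`) are hypothetical and are not taken from the paper.

```python
import numpy as np

# Minimal sketch: connection pruning interleaved with pairwise STDP learning.
# The pruning criterion (weight below a fixed threshold) and all parameter
# values here are illustrative assumptions, not the paper's actual method.

rng = np.random.default_rng(0)

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP weight change for dt = t_post - t_pre."""
    if dt > 0:  # pre spike precedes post spike: potentiate
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)  # otherwise: depress

def prune(weights, mask, threshold=0.05):
    """Permanently disconnect synapses whose weight fell below threshold."""
    mask &= weights >= threshold
    weights[~mask] = 0.0
    return weights, mask

# Synaptic weights and a binary connectivity mask for one small layer.
weights = rng.uniform(0.0, 1.0, size=(4, 8))
mask = np.ones_like(weights, dtype=bool)

for epoch in range(10):
    # Stand-in spike-time differences; with TTFS coding each neuron spikes
    # at most once per sample, so one dt per synapse per sample is natural.
    dts = rng.uniform(-50.0, 50.0, size=weights.shape)
    for i, j in np.ndindex(weights.shape):
        if mask[i, j]:
            weights[i, j] = np.clip(weights[i, j] + stdp_dw(dts[i, j]), 0.0, 1.0)
    weights, mask = prune(weights, mask)  # prune during learning, not after it

print(f"remaining connections: {mask.sum()} / {mask.size}")
```

Pruning inside the training loop, rather than after convergence, is what lets the reported learning-time savings accrue: once a synapse is masked out, its STDP update is skipped on every subsequent epoch.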
