Title
FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
Authors
Abstract
Spiking Neural Networks (SNNs) are gaining interest due to their event-driven processing, which potentially enables low-power/energy computation on hardware platforms, while offering unsupervised learning capability through the spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy, making them difficult to deploy on embedded systems, for instance on battery-powered mobile devices and IoT edge nodes. To address this, we propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference, with unsupervised learning capability, while maintaining accuracy. This is achieved by (1) reducing the computational requirements of neuronal and STDP operations, (2) improving the accuracy of STDP-based learning, (3) compressing the SNN through fixed-point quantization, and (4) incorporating the memory and energy requirements into the optimization process. FSpiNN reduces computational requirements by cutting the number of neuronal operations, the STDP-based synaptic weight updates, and the complexity of the STDP rule itself. To improve learning accuracy, FSpiNN employs timestep-based synaptic weight updates and adaptively determines the STDP potentiation factor and the effective inhibition strength. Experimental results show that, compared to the state-of-the-art work, FSpiNN achieves 7.5x memory saving and improves energy efficiency by 3.5x on average for training and by 1.8x on average for inference, across the MNIST and Fashion MNIST datasets, with no accuracy loss for a network with 4900 excitatory neurons, thereby enabling energy-efficient SNNs for edge devices and embedded systems.
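The abstract names fixed-point quantization as one of FSpiNN's compression steps. As a minimal illustrative sketch of the general idea (not the paper's actual implementation), the snippet below quantizes floating-point synaptic weights to a signed fixed-point format; the `quantize_fixed_point` helper, the Q1.7 bit split, and the NumPy usage are all assumptions made for illustration:

```python
import numpy as np

def quantize_fixed_point(weights, int_bits=1, frac_bits=7):
    """Snap floating-point synaptic weights to a signed Qm.n fixed-point grid.

    With int_bits=1 and frac_bits=7 (an illustrative Q1.7 choice), each weight
    occupies 1 + 1 + 7 = 9 bits conceptually; storing weights in a narrow
    fixed-point format instead of 32-bit floats is what yields memory savings.
    """
    scale = 2 ** frac_bits                      # size of one quantization step is 1/scale
    lo = -(2 ** int_bits)                       # most negative representable value
    hi = (2 ** int_bits) - 1.0 / scale          # most positive representable value
    q = np.round(weights * scale) / scale       # round to nearest multiple of 1/scale
    return np.clip(q, lo, hi)                   # saturate values outside the range

w = np.array([0.501, -0.126, 0.999])
wq = quantize_fixed_point(w)
print(wq)  # weights snapped to multiples of 1/128
```

A real deployment would store the integer codes (`weights * scale`) directly rather than the rescaled floats, and would pick the integer/fraction split to match the weight range learned by STDP.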