Paper Title

A Learning Framework for n-bit Quantized Neural Networks toward FPGAs

Authors

Jun Chen, Liang Liu, Yong Liu, Xianfang Zeng

Abstract

The quantized neural network (QNN) is an efficient approach to network compression and can be widely used in FPGA implementations. This paper proposes a novel learning framework for n-bit QNNs whose weights are constrained to powers of two. To solve the gradient vanishing problem, we propose a reconstructed gradient function for QNNs in the back-propagation algorithm that directly computes the real gradient rather than estimating an approximation of the gradient of the expected loss. We also propose a novel QNN structure named n-BQ-NN, which replaces multiply operations with shift operations and is more suitable for inference on FPGAs. Furthermore, we design a shift vector processing element (SVPE) array that replaces all 16-bit multiplications in the convolution operation with SHIFT operations on FPGAs. We carry out comparative experiments to evaluate our framework. The experimental results show that the quantized ResNet, DenseNet and AlexNet models produced by our learning framework achieve almost the same accuracies as the original full-precision models. Moreover, when our learning framework is used to train our n-BQ-NN from scratch, it achieves state-of-the-art results compared with typical low-precision QNNs. Experiments on the Xilinx ZCU102 platform show that our n-BQ-NN with our SVPE executes inference 2.9 times faster than with the vector processing element (VPE). Because the SHIFT operation in our SVPE array does not consume digital signal processing (DSP) resources on FPGAs, the experiments also show that the SVPE array reduces average energy consumption to 68.7% of that of the 16-bit VPE array.
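The core idea behind replacing multiplications with shifts is that multiplying by a power-of-two weight w = ±2^e reduces to a bit shift of an integer activation. The sketch below illustrates this in NumPy; the function names, the rounding-in-log-domain quantizer, and the restriction to exponents ≤ 0 are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def quantize_pow2(w, n_bits=4):
    """Illustrative power-of-two quantizer (assumed, not the paper's exact method):
    map a real weight w to (sign, exponent) with |w| ~ 2**exponent."""
    sign = int(np.sign(w))
    # Round the log2 magnitude to the nearest integer exponent and clip
    # to the representable n-bit range; exponents are kept <= 0 here,
    # assuming weights are normalized into (-1, 1].
    exp = int(np.clip(np.round(np.log2(abs(w) + 1e-12)),
                      -(2 ** (n_bits - 1)), 0))
    return sign, exp

def shift_multiply(x_int, sign, exp):
    """Multiply an integer activation by a power-of-two weight using a shift.
    Since exp <= 0, a right shift by -exp implements x * 2**exp."""
    return sign * (x_int >> (-exp))

# Example: weight 0.25 quantizes to sign=+1, exp=-2,
# so 16 * 0.25 becomes 16 >> 2 = 4 with no multiplier.
s, e = quantize_pow2(0.25)
y = shift_multiply(16, s, e)
```

On hardware this is why the SVPE array avoids DSP blocks: a barrel shifter is implemented in plain logic fabric, whereas a 16-bit multiply would occupy a DSP slice.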
