Paper Title
EasyQuant: Post-training Quantization via Scale Optimization
Paper Authors
Paper Abstract
8-bit quantization has been widely applied to accelerate network inference in various deep learning applications. There are two kinds of quantization methods: training-based quantization and post-training quantization. The training-based approach suffers from a cumbersome training process, while post-training quantization may lead to an unacceptable accuracy drop. In this paper, we present an efficient and simple post-training method via scale optimization, named EasyQuant (EQ), that can obtain accuracy comparable to training-based methods. Specifically, we first alternately optimize the scales of the weights and activations for all layers, targeting the convolutional outputs, to obtain high quantization precision. Then, we lower the bit width to INT7 for both weights and activations, and adopt INT16 intermediate storage and an integer Winograd convolution implementation to accelerate inference. Experimental results on various computer vision tasks show that EQ outperforms the TensorRT method and can achieve near-INT8 accuracy with 7-bit post-training quantization.
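To make the scale-optimization step concrete, below is a minimal sketch of the alternating search the abstract describes: one scale is line-searched while the other stays fixed, keeping whichever candidate maximizes cosine similarity between the FP32 layer output and its quantized counterpart. It is a simplified per-tensor illustration using a matmul as a stand-in for convolution; the search interval, step count, number of alternating rounds, and all function names are illustrative assumptions, not the paper's exact settings (the paper optimizes per-channel weight scales on real convolutional outputs).

```python
import numpy as np

BITS = 7
QMAX = 2 ** (BITS - 1) - 1  # 63 for INT7

def fake_quant(x, scale):
    # Symmetric linear quantization, then dequantization back to float.
    return np.clip(np.round(x / scale), -QMAX, QMAX) * scale

def cos_sim(a, b):
    a, b = a.ravel(), b.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def best_scale(tensor, partner_q, ref_out, quantizing_weight, n_steps=100):
    # Line-search one scale while the partner tensor stays fixed (already
    # quantized), maximizing cosine similarity with the FP32 reference output.
    base = np.abs(tensor).max() / QMAX  # max-abs initial scale
    best_s, best_c = base, -1.0
    for alpha in np.linspace(0.5, 1.2, n_steps):  # search interval: an assumption
        s = alpha * base
        if quantizing_weight:
            out = partner_q @ fake_quant(tensor, s)   # A_q @ W_q
        else:
            out = fake_quant(tensor, s) @ partner_q   # A_q @ W_q
        c = cos_sim(out, ref_out)
        if c > best_c:
            best_s, best_c = s, c
    return best_s

def easyquant_layer(act, weight, n_rounds=3):
    # Alternately refine weight and activation scales for one layer against
    # the full-precision output (matmul used as a stand-in for convolution).
    ref = act @ weight
    s_a = np.abs(act).max() / QMAX
    s_w = np.abs(weight).max() / QMAX
    for _ in range(n_rounds):
        s_w = best_scale(weight, fake_quant(act, s_a), ref, quantizing_weight=True)
        s_a = best_scale(act, fake_quant(weight, s_w), ref, quantizing_weight=False)
    return s_a, s_w

rng = np.random.default_rng(0)
act = rng.normal(size=(32, 64))
weight = rng.normal(size=(64, 128))
s_a, s_w = easyquant_layer(act, weight)
print("cosine similarity:",
      cos_sim(act @ weight, fake_quant(act, s_a) @ fake_quant(weight, s_w)))
```

The arithmetic behind pairing INT7 with INT16 intermediate storage: with 7-bit operands, each product is at most 63 × 63 = 3969 < 2^12, so eight products can be accumulated in a signed INT16 register before overflow, whereas INT8 products (up to 127 × 127 = 16129) leave room for only two.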