Paper Title

Gradient $\ell_1$ Regularization for Quantization Robustness

Paper Authors

Milad Alizadeh, Arash Behboodi, Mart van Baalen, Christos Louizos, Tijmen Blankevoort, Max Welling

Abstract

We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on demand to different bit-widths as energy and memory requirements of the application change. Unlike quantization-aware training using the straight-through estimator, which only targets a specific bit-width and requires access to training data and pipeline, our regularization-based method paves the way for "on the fly" post-training quantization to various bit-widths. We show that by modeling quantization as an $\ell_\infty$-bounded perturbation, the first-order term in the loss expansion can be regularized using the $\ell_1$-norm of gradients. We experimentally validate the effectiveness of our regularization scheme on different architectures on CIFAR-10 and ImageNet datasets.
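As a brief sketch of the reasoning summarized in the abstract (not a full derivation from the paper): writing the quantization noise on the weights $w$ as a perturbation $\delta$ with $\|\delta\|_\infty \le \epsilon$, a first-order Taylor expansion of the loss $L$ gives

$$
L(w + \delta) \approx L(w) + \delta^\top \nabla_w L(w),
\qquad
\max_{\|\delta\|_\infty \le \epsilon} \delta^\top \nabla_w L(w) = \epsilon \, \|\nabla_w L(w)\|_1,
$$

where the second identity is the standard duality between the $\ell_\infty$- and $\ell_1$-norms. Penalizing $\|\nabla_w L(w)\|_1$ during training therefore controls the worst-case first-order increase in loss caused by quantization; the same argument applies to perturbations of the activations.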
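Below is a minimal PyTorch-style sketch of such a gradient $\ell_1$ regularizer. It is an illustration under our own assumptions, not the authors' implementation: only the weight-gradient term is shown (the paper also regularizes activation gradients), and model, criterion, and reg_lambda are placeholder names.

```python
import torch

def regularized_loss(model, criterion, inputs, targets, reg_lambda=0.05):
    """Task loss plus an l1 penalty on the gradient of the loss w.r.t. the weights.

    Penalizing ||dL/dw||_1 bounds the worst-case first-order increase of the loss
    under an l_inf-bounded (quantization-like) weight perturbation.
    """
    outputs = model(inputs)
    task_loss = criterion(outputs, targets)

    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True keeps the graph so the gradient-norm penalty is itself differentiable
    grads = torch.autograd.grad(task_loss, params, create_graph=True)
    grad_l1 = sum(g.abs().sum() for g in grads)

    return task_loss + reg_lambda * grad_l1
```

Training then proceeds as usual: compute regularized_loss(...), call backward() on it, and step the optimizer. Because the penalty requires a double backward pass, each training step is more expensive than standard training.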
