Paper Title

Automatic Network Adaptation for Ultra-Low Uniform-Precision Quantization

Authors

Seongmin Park, Beomseok Kwon, Jieun Lim, Kyuyoung Sim, Tae-Ho Kim, Jungwook Choi

Abstract

Uniform-precision neural network quantization has gained popularity because it simplifies the densely packed arithmetic units needed for high computing capability. However, it ignores the heterogeneous sensitivity of individual layers to quantization error, resulting in sub-optimal inference accuracy. This work proposes a novel neural architecture search, called neural channel expansion, that adjusts the network structure to alleviate the accuracy degradation caused by ultra-low uniform-precision quantization. The proposed method selectively expands the channels of quantization-sensitive layers while satisfying hardware constraints (e.g., FLOPs, PARAMs). Through in-depth analysis and experiments, we demonstrate that the proposed method can adapt the channels of several popular networks to achieve superior 2-bit quantization accuracy on CIFAR10 and ImageNet. In particular, we achieve the best-to-date Top-1/Top-5 accuracy for 2-bit ResNet50 with smaller FLOPs and parameter size.
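To make the idea concrete, here is a minimal, hypothetical sketch in Python/NumPy of the two ingredients the abstract describes: a symmetric 2-bit uniform quantizer, and a loop that widens the layers with the largest quantization error under a simple channel budget. Everything here is an illustrative assumption — the function names (`uniform_quantize`, `quant_mse`), the error-per-channel scoring, the greedy expansion, and the channel budget are stand-ins, not the paper's actual NCE search or quantizer.

```python
import numpy as np

rng = np.random.default_rng(0)


def uniform_quantize(w: np.ndarray, bits: int = 2) -> np.ndarray:
    """Symmetric uniform quantizer -- a common baseline formulation;
    the paper's exact quantization scheme may differ."""
    qmax = 2 ** (bits - 1) - 1              # 2-bit -> integer levels {-1, 0, +1}
    scale = max(float(np.abs(w).max()) / qmax, 1e-8)
    return np.clip(np.round(w / scale), -qmax, qmax) * scale


def quant_mse(w: np.ndarray, bits: int = 2) -> float:
    """Quantization error of one layer, used here as a crude sensitivity proxy."""
    return float(np.mean((w - uniform_quantize(w, bits)) ** 2))


# Toy "network": three conv-like weight tensors of different widths.
weights = {f"conv{i}": rng.standard_normal((32 * (i + 1), 3, 3)) for i in range(3)}
channels = {name: w.shape[0] for name, w in weights.items()}

# Stand-in hardware constraint: allow ~15% more total channels. A real search
# would meter FLOPs/PARAMs of the full architecture instead.
budget = int(1.15 * sum(channels.values()))

step = 8  # channels added per expansion move
while sum(channels.values()) + step <= budget:
    # Greedily widen the layer with the highest error per existing channel,
    # so repeatedly expanding the same layer yields diminishing priority.
    name = max(weights, key=lambda n: quant_mse(weights[n]) / channels[n])
    channels[name] += step

print(channels)  # quantization-sensitive layers end up wider than their originals
```

Dividing the error by the current channel count is just one way to keep this toy loop from expanding a single layer forever; the actual method performs the sensitivity/constraint trade-off inside a neural architecture search rather than a post-hoc greedy pass.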
