Paper Title
End-to-end Learned Image Compression with Fixed Point Weight Quantization
Paper Authors
Paper Abstract
Learned image compression (LIC) has caught up with traditional hand-crafted methods such as JPEG2000 and BPG in terms of coding gain. However, the large model size of the network prevents LIC from being deployed on resource-limited embedded systems. This paper presents an LIC with 8-bit fixed-point weights. First, we quantize the weights in groups and propose a non-linear, memory-free codebook. Second, we explore the optimal grouping and quantization scheme. Finally, we develop a novel weight-clipping fine-tuning scheme. Experimental results show that the coding loss caused by quantization is small, while the model size is reduced by around 75% compared with the 32-bit floating-point anchor. To the best of our knowledge, this is the first work to fully explore and evaluate LIC with fixed-point weights, and our proposed quantized LIC is able to outperform BPG in terms of MS-SSIM.
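
To make the grouped weight quantization described in the abstract concrete, below is a minimal sketch of per-group 8-bit fixed-point quantization with symmetric clipping. It assumes per-output-channel grouping and a uniform (linear) quantizer with one scale per group; the paper's non-linear memory-free codebook, its optimal grouping choice, and its clipping fine-tuning procedure are not reproduced here. All function names and parameters are illustrative, not taken from the paper.

# Illustrative sketch (not the authors' exact method): per-group 8-bit
# fixed-point quantization of convolution weights with symmetric clipping.
# Grouping granularity (per output channel) and the clipping ratio are
# assumptions chosen for demonstration.
import numpy as np

def quantize_group(w, num_bits=8, clip_ratio=1.0):
    """Quantize one weight group to signed fixed-point integers plus a scale."""
    limit = clip_ratio * np.abs(w).max()
    w_clipped = np.clip(w, -limit, limit)
    # Step size that maps the clipped range onto [-127, 127] for 8 bits.
    scale = np.abs(w_clipped).max() / (2 ** (num_bits - 1) - 1)
    if scale == 0:
        return np.zeros_like(w, dtype=np.int8), 1.0
    q = np.round(w_clipped / scale).astype(np.int8)
    return q, scale

def quantize_conv_weights(weights, num_bits=8, clip_ratio=1.0):
    """Quantize a conv weight tensor (out_ch, in_ch, kH, kW) group-wise,
    one group per output channel; returns int8 weights and per-group scales."""
    q_weights = np.empty_like(weights, dtype=np.int8)
    scales = np.empty(weights.shape[0], dtype=np.float32)
    for oc in range(weights.shape[0]):
        q_weights[oc], scales[oc] = quantize_group(weights[oc], num_bits, clip_ratio)
    return q_weights, scales

# Example: quantize a random weight tensor and check the reconstruction error.
w = np.random.randn(8, 3, 5, 5).astype(np.float32)
q, s = quantize_conv_weights(w, num_bits=8, clip_ratio=0.98)
w_hat = q.astype(np.float32) * s[:, None, None, None]
print("max abs error:", np.abs(w - w_hat).max())

Storing int8 weights plus one floating-point scale per group is what yields the roughly 75% size reduction relative to 32-bit floating-point weights mentioned above.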