Paper Title
Post-Training Quantization for Cross-Platform Learned Image Compression
Paper Authors
Paper Abstract
It has been witnessed that learned image compression has outperformed conventional image coding techniques and tends to be practical in industrial applications. One of the most critical issues that need to be considered is non-deterministic calculation, which makes the probability prediction inconsistent across platforms and prevents successful decoding. We propose to solve this problem by introducing well-developed post-training quantization and making the model inference integer-arithmetic-only, which is much simpler than existing training- and fine-tuning-based approaches yet still preserves the superior rate-distortion performance of learned image compression. Based on that, we further improve the discretization of the entropy parameters and extend the deterministic inference to fit Gaussian mixture models. With our proposed methods, current state-of-the-art image compression models can infer in a cross-platform-consistent manner, which makes the further development and practice of learned image compression more promising.
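To make the cross-platform issue concrete, below is a minimal sketch of one ingredient of such a pipeline: discretizing a Gaussian entropy model into an integer CDF table so that encoder and decoder, given the same quantized `(mu, sigma)`, derive bit-identical probability tables for the entropy coder. This is an illustration, not the paper's exact scheme; `PRECISION`, `integer_cdf_table`, and the symbol range are assumed names and choices, and `math.erf` here stands in for whatever shared, deterministic evaluation the real method would use (the paper additionally makes the network inference itself integer-only).

```python
import math

PRECISION = 16  # assumed choice: total probability mass = 2**16

def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def integer_cdf_table(mu, sigma, lo=-8, hi=8):
    """Discretize the CDF over integer symbols [lo, hi] into a strictly
    increasing integer table in [0, 2**PRECISION]; every symbol keeps
    nonzero mass, as range/arithmetic coders require."""
    scale = 1 << PRECISION
    # Symbol boundaries s - 0.5 for s in lo..hi+1, scaled and rounded.
    cdf = [round(gaussian_cdf(s - 0.5, mu, sigma) * scale)
           for s in range(lo, hi + 2)]
    cdf[0] = 0
    for i in range(1, len(cdf)):            # enforce strict increase
        cdf[i] = max(cdf[i], cdf[i - 1] + 1)
    cdf[-1] = scale                          # pin the top of the range
    for i in range(len(cdf) - 2, 0, -1):     # clamp back below the top
        cdf[i] = min(cdf[i], cdf[i + 1] - 1)
    return cdf

table = integer_cdf_table(mu=0.0, sigma=1.0)
# Since all entries are integers derived from the same quantized
# parameters, encoder and decoder tables match exactly on any platform.
```

The monotonicity passes are what make the table safe for a range coder: a symbol whose probability rounds to zero would otherwise be undecodable, and an entry exceeding `2**PRECISION` would overflow the coder's state.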