Paper Title

Universal Learned Image Compression With Low Computational Cost

Paper Authors

Bowen Li, Yao Xin, Youneng Bao, Fanyang Meng, Yongsheng Liang, Wen Tan

Paper Abstract

Recently, learned image compression methods have developed rapidly and exhibited excellent rate-distortion performance compared to traditional standards such as JPEG, JPEG2000 and BPG. However, learning-based methods suffer from high computational costs, which is detrimental to deployment on devices with limited resources. To this end, we propose shift-addition parallel modules (SAPMs), including SAPM-E for the encoder and SAPM-D for the decoder, to largely reduce energy consumption. Specifically, they can be used as plug-and-play components to upgrade existing CNN-based architectures, where the shift branch extracts large-grained features, complementing the small-grained features learned by the addition branch. Furthermore, we thoroughly analyze the probability distribution of latent representations and propose Laplace Mixture Likelihoods for more accurate entropy estimation. Experimental results demonstrate that the proposed methods achieve performance comparable to, or even better than, that of the convolutional counterpart on both PSNR and MS-SSIM metrics, with about a 2x energy reduction.
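
The abstract does not spell out the internals of SAPM-E/SAPM-D. As a rough illustration only, the sketch below shows one plausible reading of a shift-addition parallel block in PyTorch: a branch whose weights are rounded to signed powers of two (so multiplications reduce to bit-shifts on integer hardware) in parallel with an AdderNet-style L1-distance branch, with the two outputs summed. The class names (SAPMBlock, ShiftConv2d, AdderConv2d), the power-of-two quantization scheme, and the way the branches are fused are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a shift-addition parallel block (not the paper's exact SAPM).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShiftConv2d(nn.Conv2d):
    """Conv whose weights are rounded to signed powers of two at inference time.

    Note: torch.round has zero gradient, so a real training setup would use a
    straight-through estimator; this sketch only illustrates the forward pass.
    """

    def forward(self, x):
        w = self.weight
        # sign(w) * 2^round(log2 |w|): each product becomes a bit-shift on integer hardware
        w_shift = torch.sign(w) * 2.0 ** torch.round(torch.log2(w.abs().clamp_min(1e-8)))
        return F.conv2d(x, w_shift, self.bias, self.stride, self.padding)


class AdderConv2d(nn.Module):
    """AdderNet-style layer: output = -sum |x_patch - w| (multiplication-free)."""

    def __init__(self, in_ch, out_ch, k, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * k * k) * 0.02)
        self.k, self.stride, self.padding = k, stride, padding

    def forward(self, x):
        b, _, h, w = x.shape
        cols = F.unfold(x, self.k, stride=self.stride, padding=self.padding)  # (B, C*k*k, L)
        # negative L1 distance between every patch and every filter
        out = -(cols.unsqueeze(1) - self.weight.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
        oh = (h + 2 * self.padding - self.k) // self.stride + 1
        ow = (w + 2 * self.padding - self.k) // self.stride + 1
        return out.reshape(b, -1, oh, ow)


class SAPMBlock(nn.Module):
    """Parallel shift + addition branches as a drop-in replacement for a conv layer."""

    def __init__(self, in_ch, out_ch, k=3, stride=1):
        super().__init__()
        pad = k // 2
        self.shift_branch = ShiftConv2d(in_ch, out_ch, k, stride, pad)  # large-grained features
        self.add_branch = AdderConv2d(in_ch, out_ch, k, stride, pad)    # small-grained features
        self.norm = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return self.norm(self.shift_branch(x) + self.add_branch(x))


if __name__ == "__main__":
    y = SAPMBlock(3, 16)(torch.randn(1, 3, 64, 64))
    print(y.shape)  # torch.Size([1, 16, 64, 64])
```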

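The abstract likewise gives no formula for the Laplace Mixture Likelihoods. By analogy with the Gaussian mixture entropy models commonly used in learned compression, a K-component discretized Laplace mixture for a quantized latent element could plausibly be written as below; the mixture weights, locations, and scales (presumably predicted by the hyperprior or context model) and all symbol names are assumptions, not the paper's notation.

```latex
% Hypothetical form of a K-component discretized Laplace mixture likelihood
p(\hat{y}_i) \;=\; \sum_{k=1}^{K} w_i^{(k)}
  \Big[ F\big(\hat{y}_i + \tfrac{1}{2};\, \mu_i^{(k)}, b_i^{(k)}\big)
      - F\big(\hat{y}_i - \tfrac{1}{2};\, \mu_i^{(k)}, b_i^{(k)}\big) \Big],
\qquad
F(y;\mu,b) \;=\; \tfrac{1}{2} + \tfrac{1}{2}\,\operatorname{sign}(y-\mu)\big(1 - e^{-|y-\mu|/b}\big),

% with the estimated rate
R \;=\; \mathbb{E}\Big[-\textstyle\sum_i \log_2 p(\hat{y}_i)\Big].
```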