Paper Title

An Introduction to Neural Data Compression

Paper Authors

Yibo Yang, Stephan Mandt, Lucas Theis

Paper Abstract

Neural compression is the application of neural networks and other machine learning methods to data compression. Recent advances in statistical machine learning have opened up new possibilities for data compression, allowing compression algorithms to be learned end-to-end from data using powerful generative models such as normalizing flows, variational autoencoders, diffusion probabilistic models, and generative adversarial networks. The present article aims to introduce this field of research to a broader machine learning audience by reviewing the necessary background in information theory (e.g., entropy coding, rate-distortion theory) and computer vision (e.g., image quality assessment, perceptual metrics), and providing a curated guide through the essential ideas and methods in the literature thus far.
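As a concrete hint at the rate-distortion trade-off mentioned in the abstract, the sketch below trains a toy learned lossy compressor with the Lagrangian objective R + λD, using additive uniform noise as a differentiable stand-in for quantization during training. This is a minimal illustration only, not the paper's method: the `ToyCompressor` module, `rd_loss` function, and all shapes and hyperparameters are assumptions chosen for brevity, and the example assumes PyTorch is installed.

```python
# Illustrative sketch of the rate-distortion Lagrangian used to train learned
# lossy compressors: minimize R + lambda * D, where R is a differentiable proxy
# for the code length under a learned entropy model and D is a distortion (MSE).
# All names, shapes, and hyperparameters here are assumptions, not from the paper.
import math
import torch
import torch.nn as nn

class ToyCompressor(nn.Module):
    def __init__(self, dim=64, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, dim))
        # Factorized Gaussian entropy model over the latents (a common simplification).
        self.prior_mean = nn.Parameter(torch.zeros(latent_dim))
        self.prior_log_scale = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, x):
        y = self.encoder(x)
        # Additive uniform noise in [-0.5, 0.5): a differentiable stand-in for rounding.
        y_tilde = y + torch.rand_like(y) - 0.5
        x_hat = self.decoder(y_tilde)
        # Rate term: negative log-density of the noisy latents under the prior,
        # divided by log(2) so it is measured in bits (a proxy for code length).
        prior = torch.distributions.Normal(self.prior_mean, self.prior_log_scale.exp())
        rate_bits = -prior.log_prob(y_tilde).sum(dim=-1) / math.log(2.0)
        return x_hat, rate_bits

def rd_loss(x, x_hat, rate_bits, lam=0.01):
    distortion = ((x - x_hat) ** 2).mean(dim=-1)   # per-example MSE
    return (rate_bits + lam * distortion).mean()   # rate-distortion Lagrangian

# Usage: one gradient step on random data standing in for a training batch.
model = ToyCompressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)
x_hat, rate_bits = model(x)
loss = rd_loss(x, x_hat, rate_bits)
loss.backward()
opt.step()
```

The uniform-noise relaxation is what makes the otherwise discrete rate term differentiable; at deployment time the latents would be rounded and entropy-coded under the learned prior, topics the article reviews in detail.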
