Paper Title

Locally Masked Convolution for Autoregressive Models

Paper Authors

Ajay Jain, Pieter Abbeel, Deepak Pathak

Paper Abstract

High-dimensional generative models have many applications including image compression, multimedia generation, anomaly detection and data completion. State-of-the-art estimators for natural images are autoregressive, decomposing the joint distribution over pixels into a product of conditionals parameterized by a deep neural network, e.g. a convolutional neural network such as the PixelCNN. However, PixelCNNs only model a single decomposition of the joint, and only a single generation order is efficient. For tasks such as image completion, these models are unable to use much of the observed context. To generate data in arbitrary orders, we introduce LMConv: a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image. Using LMConv, we learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation (2.89 bpd on unconditional CIFAR10), as well as globally coherent image completions. Our code is available at https://ajayjain.github.io/lmconv.
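
To make the described modification concrete, below is a minimal PyTorch-style sketch (not the authors' released implementation) of a locally masked convolution: patches are extracted with im2col (torch.nn.functional.unfold), and a per-location binary mask is applied to the patch entries before the shared convolution weights are used, so a different subset of kernel weights can be active at every spatial position. The mask layout and the helper name locally_masked_conv2d are illustrative assumptions.

```python
# Minimal sketch of a locally masked 2D convolution via im2col.
# Assumed (not from the paper's code): the function name and the mask layout
# (B or 1, C_in*kH*kW, H*W), one binary entry per kernel weight per location.

import torch
import torch.nn.functional as F


def locally_masked_conv2d(x, weight, mask, bias=None):
    """x:      (B, C_in, H, W) input features
    weight: (C_out, C_in, kH, kW) shared convolution weights
    mask:   (B or 1, C_in*kH*kW, H*W) binary mask over weights at each location
    """
    B, C_in, H, W = x.shape
    C_out, _, kH, kW = weight.shape

    # im2col: patches has shape (B, C_in*kH*kW, H*W) with "same" padding.
    patches = F.unfold(x, kernel_size=(kH, kW), padding=(kH // 2, kW // 2))

    # Zero out a (potentially different) subset of weights at every location.
    patches = patches * mask

    # Apply the shared weights to the masked patches: (B, C_out, H*W).
    out = weight.view(C_out, -1) @ patches
    if bias is not None:
        out = out + bias.view(1, C_out, 1)
    return out.view(B, C_out, H, W)


# Usage example: a raster-scan-style mask that zeroes the center tap and all
# "future" taps, applied uniformly here for simplicity (in general the mask
# can differ per location, which is the point of LMConv).
B, C_in, C_out, H, W, k = 1, 3, 8, 8, 8, 3
x = torch.randn(B, C_in, H, W)
weight = torch.randn(C_out, C_in, k, k)
taps = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=torch.float32)  # kH*kW taps
mask = taps.repeat(C_in).view(1, C_in * k * k, 1).expand(1, -1, H * W)
y = locally_masked_conv2d(x, weight, mask)
print(y.shape)  # torch.Size([1, 8, 8, 8])
```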
