Paper Title

Training End-to-end Single Image Generators without GANs

Paper Authors

Yael Vinker, Nir Zabari, Yedid Hoshen

Paper Abstract

We present AugurOne, a novel approach for training single image generative models. Our approach trains an upscaling neural network using non-affine augmentations of the (single) input image, particularly including non-rigid thin plate spline image warps. The extensive augmentations significantly increase the in-sample distribution for the upsampling network, enabling the upscaling of highly variable inputs. A compact latent space is jointly learned, allowing for controlled image synthesis. Differently from Single Image GANs, our approach does not require GAN training and takes place in an end-to-end fashion, allowing fast and stable training. We experimentally evaluate our method and show that it obtains compelling novel animations of single images, as well as state-of-the-art performance on conditional generation tasks, e.g., paint-to-image and edges-to-image.
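
As a rough illustration of the training setup described in the abstract, the following is a minimal PyTorch sketch (not the authors' code): an upscaling network and a small set of latent codes are trained end-to-end with a plain reconstruction loss on warped versions of a single image, with no GAN involved. The smooth random-displacement warp is a simplified stand-in for the paper's thin plate spline augmentations, and all names (`Upscaler`, `random_smooth_warp`, `codes`, hyperparameters) are hypothetical.

```python
# Minimal sketch of the AugurOne training idea (hypothetical names, not the
# authors' code): train an upscaler on non-rigid augmentations of one image,
# with jointly learned latent codes and a plain reconstruction loss (no GAN).
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_smooth_warp(img, strength=0.1, grid_size=4):
    """Warp an image with a smooth random displacement field (TPS stand-in)."""
    b, _, h, w = img.shape
    # Coarse random offsets, upsampled into a smooth dense flow field.
    coarse = (torch.rand(b, 2, grid_size, grid_size, device=img.device) - 0.5) * 2 * strength
    flow = F.interpolate(coarse, size=(h, w), mode="bicubic", align_corners=True)
    # Identity sampling grid in [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img.device),
        torch.linspace(-1, 1, w, device=img.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=True)

class Upscaler(nn.Module):
    """Toy convolutional upscaler conditioned on a per-sample latent code."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + latent_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, low_res, z):
        z_map = z[:, :, None, None].expand(-1, -1, *low_res.shape[-2:])
        return self.net(torch.cat([low_res, z_map], dim=1))

# End-to-end loop: warp the single image, downscale it, and train the network
# plus per-augmentation latent codes to reconstruct the warped high-res target.
image = torch.rand(1, 3, 128, 128)          # placeholder for the single training image
model = Upscaler(latent_dim=8)
codes = nn.Parameter(torch.zeros(16, 8))    # jointly learned compact latent space
opt = torch.optim.Adam(list(model.parameters()) + [codes], lr=1e-3)

for step in range(200):
    idx = torch.randint(0, codes.shape[0], (4,))
    target = random_smooth_warp(image.expand(4, -1, -1, -1))
    low_res = F.interpolate(target, scale_factor=0.5, mode="bilinear", align_corners=False)
    recon = model(low_res, codes[idx])
    loss = F.l1_loss(recon, target)          # plain reconstruction loss, no adversarial term
    opt.zero_grad(); loss.backward(); opt.step()
```

At inference time, under this sketch, one would sample or interpolate latent codes (and optionally warp the low-resolution input) to obtain controlled variations of the original image.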
