Paper Title
DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents
Paper Authors
Paper Abstract
Diffusion probabilistic models have been shown to generate state-of-the-art results on several competitive image synthesis benchmarks but lack a low-dimensional, interpretable latent space and are slow at generation. On the other hand, standard Variational Autoencoders (VAEs) typically have access to a low-dimensional latent space but exhibit poor sample quality. We present DiffuseVAE, a novel generative framework that integrates a VAE within a diffusion model framework, and leverage this to design novel conditional parameterizations for diffusion models. We show that the resulting model equips diffusion models with a low-dimensional, VAE-inferred latent code which can be used for downstream tasks like controllable synthesis. The proposed method also improves upon the speed vs. quality tradeoff exhibited by standard unconditional DDPM/DDIM models (for instance, FID of 16.47 vs. 34.36 using a standard DDIM on the CelebA-HQ-128 benchmark with T=10 reverse process steps) without having been explicitly trained for such an objective. Furthermore, the proposed model exhibits synthesis quality comparable to state-of-the-art models on standard image synthesis benchmarks like CIFAR-10 and CelebA-64 while outperforming most existing VAE-based methods. Lastly, we show that the proposed method exhibits inherent generalization to different types of noise in the conditioning signal. For reproducibility, our source code is publicly available at https://github.com/kpandey008/DiffuseVAE.
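The two-stage design described above (a VAE maps a low-dimensional code to a rough reconstruction, which then conditions every reverse step of a DDPM) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the networks are stand-in random linear maps, the dimensions and schedule are hypothetical, and concatenating the reconstruction with the noisy sample is just one plausible conditioning parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_LATENT, T = 16, 4, 10  # toy sizes and step count (hypothetical)

# Stage 1: stub VAE decoder. In DiffuseVAE, x_hat = decode(z) is a coarse
# reconstruction from the low-dimensional latent; here it is a random map.
W_dec = rng.normal(size=(D_IMG, D_LATENT))

def vae_decode(z):
    return W_dec @ z

# Stage 2: stub conditional noise predictor eps_theta(x_t, t, x_hat).
# A real model is a U-Net; we condition by concatenating x_hat with x_t.
W_eps = rng.normal(size=(D_IMG, 2 * D_IMG)) * 0.01

def eps_theta(x_t, t, x_hat):
    return W_eps @ np.concatenate([x_t, x_hat])

# Standard DDPM noise schedule and ancestral sampling loop, with the VAE
# reconstruction held fixed as the conditioning signal at every step.
betas = np.linspace(1e-4, 0.2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def sample(z):
    x_hat = vae_decode(z)        # low-dim latent -> rough image
    x = rng.normal(size=D_IMG)   # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_theta(x, t, x_hat)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.normal(size=D_IMG) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

z = rng.normal(size=D_LATENT)    # the controllable low-dimensional code
img = sample(z)
print(img.shape)
```

Because generation is driven entirely by `z`, interpolating or editing this low-dimensional code steers the diffusion output, which is the mechanism behind the controllable-synthesis claim in the abstract.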