Paper Title

Denoising Diffusion Implicit Models

Authors

Jiaming Song, Chenlin Meng, Stefano Ermon

Abstract

Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
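The core of the accelerated sampler described above is that the generative process is no longer tied to a step-by-step Markov chain: each update first predicts the clean sample from the current noisy one, then jumps directly to an earlier noise level. A minimal sketch of the deterministic (eta = 0) DDIM update is given below; the noise prediction `eps` (the output of a trained network, not implemented here) and the cumulative schedule values `alpha_bar_*` are assumptions for illustration.

```python
import numpy as np

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update (eta = 0).

    x_t: current noisy sample.
    eps: predicted noise epsilon_theta(x_t, t) from a trained model (assumed given).
    alpha_bar_t / alpha_bar_prev: cumulative noise-schedule products at the
    current timestep and at the (possibly much earlier) target timestep --
    the gap between them is what allows skipping steps.
    """
    # Predict x_0 from the current sample and the predicted noise.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Re-noise the x_0 estimate down to the target noise level.
    return np.sqrt(alpha_bar_prev) * x0_pred + np.sqrt(1.0 - alpha_bar_prev) * eps
```

Because the update is deterministic given `eps`, the same initial latent always maps to the same sample, which is what makes the latent-space interpolation mentioned in the abstract semantically meaningful.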
