Non Gaussian Denoising Diffusion Models

Eliya Nachmani, Robin San-Roman, Lior Wolf



Generative diffusion processes are an emerging and effective tool for image and speech generation. In existing methods, the underlying noise distribution of the diffusion process is Gaussian. However, fitting distributions with more degrees of freedom could improve the performance of such generative models. In this work, we investigate other types of noise distribution for the diffusion process. Specifically, we show that noise from a Gamma distribution provides improved results for image and speech generation. Moreover, we show that using a mixture of Gaussian noise variables in the diffusion process improves performance over a diffusion process based on a single distribution. Our approach preserves the ability to efficiently sample states of the training diffusion process while using Gamma noise or a mixture of noise distributions.
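To make the last point concrete, below is a minimal sketch of how closed-form sampling of a noised state x_t at an arbitrary timestep t can work when the noise is standardized to zero mean and unit variance, whether it is drawn from a Gaussian, a Gamma distribution, or a two-component mixture of Gaussians. The beta schedule, Gamma shape/scale, and mixture weights/means/variances here are illustrative assumptions, not the exact parameterization from the paper.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear beta schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)    # cumulative product alpha_bar_t

def standardized_noise(shape, kind, rng):
    """Zero-mean, unit-variance noise from the chosen family."""
    if kind == "gaussian":
        return rng.standard_normal(shape)
    if kind == "gamma":
        k, theta = 1.0, 1.0                              # assumed shape/scale
        g = rng.gamma(k, theta, size=shape)              # mean k*theta, var k*theta^2
        return (g - k * theta) / (np.sqrt(k) * theta)    # standardize
    if kind == "mog":
        w = np.array([0.5, 0.5])                         # assumed 2-component mixture
        mu = np.array([-1.0, 1.0])
        sigma = np.array([0.5, 0.5])
        comp = rng.choice(2, size=shape, p=w)
        z = mu[comp] + sigma[comp] * rng.standard_normal(shape)
        mean = w @ mu
        var = w @ (sigma**2 + mu**2) - mean**2           # analytic mixture moments
        return (z - mean) / np.sqrt(var)
    raise ValueError(f"unknown noise kind: {kind}")

def q_sample(x0, t, kind="gaussian", rng=np.random.default_rng(0)):
    """Sample x_t directly from x_0 without simulating the full chain:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = standardized_noise(x0.shape, kind, rng)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# Example: jump straight to step t=500 of the diffusion under each noise family.
x0 = np.random.default_rng(1).standard_normal(16000)     # stand-in waveform
for kind in ("gaussian", "gamma", "mog"):
    xt = q_sample(x0, t=500, kind=kind)
    print(kind, float(xt.std()))
```

Because the noise is standardized before being scaled by sqrt(1 - alpha_bar_t), the forward marginals keep the same first two moments as in the Gaussian case, which is what makes this direct jump to an arbitrary timestep possible during training.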

Here are some samples from our model for you to listen to:

  • Ground Truth - original sample from the LJ Speech dataset
  • WaveGrad - sample generated with WaveGrad (noise schedules: 6 iterations - grid search; 25 - Fibonacci; 100 and 1000 - linear)
  • Ours Gamma - generated with our proposed Gamma-distribution method
  • Ours Mixture of Gaussians (MoG) - generated with our proposed two-component mixture-of-Gaussians method



6 Iteration Samples

[Audio players: Ground Truth | WaveGrad | Ours - Gamma | Ours - MoG]

25 Iteration Samples

[Audio players: Ground Truth | WaveGrad | Ours - Gamma | Ours - MoG]

100 Iteration Samples

[Audio players: Ground Truth | WaveGrad | Ours - Gamma | Ours - MoG]

1000 Iteration Samples

[Audio players: Ground Truth | WaveGrad | Ours - Gamma | Ours - MoG]