Abstract
Score-based diffusion models have emerged as one of the most promising frameworks for deep generative modelling, due to both their mathematical foundations and their state-of-the-art performance in many tasks. Empirically, it has been reported that samplers based on ordinary differential equations (ODEs) are inferior to those based on stochastic differential equations (SDEs). In this article, we systematically analyse the difference between the ODE and SDE dynamics of score-based diffusion models and show how this relates to an associated Fokker-Planck equation. We rigorously describe the full range of dynamics and approximations arising when training score-based diffusion models and derive a theoretical upper bound on the Wasserstein-2 distance between the ODE- and SDE-induced distributions in terms of a Fokker-Planck residual. We also demonstrate numerically, via explicit comparisons, that conventional score-based diffusion models can exhibit significant differences between ODE- and SDE-induced distributions. Moreover, we show numerically that reducing this Fokker-Planck residual, by adding it as an additional regularization term during training, closes the gap between ODE- and SDE-induced distributions. Our experiments suggest that this regularization can improve the distribution generated by the ODE; however, this can come at the cost of degraded SDE sample quality.

This article is part of the theme issue 'Partial differential equations in data science'.
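To fix ideas, one common way to write such a Fokker-Planck residual (the abstract does not specify the exact form used in the article, so the notation below is an assumption following standard score-based diffusion conventions) starts from a forward SDE $\mathrm{d}X_t = f(X_t,t)\,\mathrm{d}t + g(t)\,\mathrm{d}B_t$, whose marginal densities $p_t$ satisfy the Fokker-Planck equation; the true score $s(x,t) = \nabla_x \log p_t(x)$ then solves an induced PDE, and the residual measures how far a learned score $s_\theta$ is from solving it:

```latex
% Fokker-Planck equation for the marginals of the forward SDE:
%   \partial_t p_t = -\nabla\cdot\bigl(f\,p_t\bigr)
%                    + \tfrac{1}{2} g(t)^2 \,\Delta p_t .
%
% Writing s = \nabla \log p_t and taking the gradient of the
% log-density form of this equation yields a PDE for the score:
%   \partial_t s = \nabla\Bigl(
%       \tfrac{1}{2} g^2 \bigl(\nabla\cdot s + \lVert s\rVert^2\bigr)
%       - f\cdot s - \nabla\cdot f \Bigr).
%
% A learned score s_\theta generally violates this identity; the
% residual (a hypothetical but standard choice)
%   R_\theta(x,t) = \partial_t s_\theta
%       - \nabla\Bigl(\tfrac{1}{2} g^2 \bigl(\nabla\cdot s_\theta
%         + \lVert s_\theta\rVert^2\bigr)
%         - f\cdot s_\theta - \nabla\cdot f \Bigr)
% can then be penalized during training, e.g. by adding
% \lambda\,\mathbb{E}\,\lVert R_\theta \rVert^2 to the score-matching
% loss, with \lambda a regularization weight.
```

When $R_\theta \equiv 0$, the probability flow ODE and the reverse SDE built from $s_\theta$ transport the same family of densities, which is why driving this residual down closes the gap between the ODE- and SDE-induced distributions.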