In this mini-series of talks, we will survey some recent advances in utilizing machine learning to help tackle challenging tasks in scientific computing, focusing on numerical methods for solving high-dimensional partial differential equations and high-dimensional sampling problems. In particular, we will discuss theoretical understandings and guarantees for such methods, as well as new challenges that arise from the perspective of numerical analysis.
In the first lecture, we will discuss score-based generative modeling (SGM), a highly successful approach for learning a probability distribution from data and generating further samples. SGM is based on learning the score function (the gradient of the log-pdf) and then using it to simulate a stochastic differential equation that transforms white noise into the data distribution.
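As a minimal illustration of the sampling step described above (a toy sketch, not the lecture's method), the following simulates the reverse-time SDE of an Ornstein-Uhlenbeck forward process with Euler-Maruyama, plugging in a score function where a trained network would normally go. For testing, the analytic score of a standard Gaussian, $\nabla \log p(x) = -x$, stands in for a learned score estimate.

```python
import numpy as np

def sample_reverse_sde(score, n_samples=5000, T=1.0, n_steps=500, seed=0):
    """Generate samples by simulating the reverse-time SDE associated with
    the forward Ornstein-Uhlenbeck process dX = -X dt + sqrt(2) dW.

    Running the reversal forward in time s = T - t gives
        dY = [Y + 2 * score(Y, T - s)] ds + sqrt(2) dW,
    where score(x, t) approximates grad log p_t(x). Here `score` is a
    plug-in argument; in SGM it would be a neural network trained on data.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Initialize from white noise, the (approximate) terminal distribution.
    x = rng.standard_normal(n_samples)
    for i in range(n_steps):
        t = T - i * dt
        drift = x + 2.0 * score(x, t)
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_samples)
    return x

# Toy check: with data ~ N(0, 1), p_t stays N(0, 1) and the true score
# is -x, so the sampler should reproduce a standard Gaussian.
samples = sample_reverse_sde(lambda x, t: -x)
```

In practice the score is unknown and must be estimated (e.g. by denoising score matching); the convergence results discussed in the lecture quantify how an $L^2$-accurate score estimate controls the error of exactly this kind of simulation.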
We will talk about some recent results in the convergence analysis of SGM and related methods. In particular, we established convergence of SGM applied to any distribution with bounded second moment, relying only on an $L^2$-accurate score estimate, with polynomial dependence on all parameters and no reliance on smoothness or functional inequalities.
Duke University