On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness

We study sampling from a target distribution ${\nu_* = e^{-f}}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like ${\|x\|^\alpha}$ for ${\alpha \in [1,2]}$ and whose gradient is $\beta$-Hölder continuous, we prove that ${\widetilde{\mathcal{O}} \Big(d^{\frac{1}{\beta}+\frac{1+\beta}{\beta}(\frac{2}{\alpha} - \boldsymbol{1}_{\{\alpha \neq 1\}})} \epsilon^{-\frac{1}{\beta}}\Big)}$ steps are sufficient to reach the $\epsilon$-neighborhood of the $d$-dimensional target distribution $\nu_*$ in KL-divergence. In terms of its $\epsilon$ dependency, this convergence rate is not directly influenced by the tail growth rate $\alpha$ of the potential, as long as the growth is at least linear; it depends only on the order of smoothness $\beta$. One notable consequence of this result is that for potentials with Lipschitz gradient, i.e. $\beta=1$, our rate recovers the best known $\epsilon$ dependency ${\widetilde{\mathcal{O}}(d\epsilon^{-1})}$, previously established for strongly convex potentials, and we show that the same rate is achievable for the wider class of potentials that are degenerately convex at infinity. The growth rate $\alpha$ affects the established rate only in high dimensions, where $d$ is large; moreover, our rate recovers the best-known dimension dependency when the tail growth of the potential is quadratic, i.e. ${\alpha = 2}$, in the current setup. Our framework allows for finite perturbations and any order of smoothness ${\beta\in(0,1]}$; consequently, our results apply to a wide class of non-convex potentials that are weakly smooth and exhibit at least linear tail growth.
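
The unadjusted LMC iteration referenced above is the Euler–Maruyama discretization of the Langevin diffusion, $x_{k+1} = x_k - \eta\,\nabla f(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$. Below is a minimal Python sketch of this iteration applied to an illustrative potential with $\|x\|^\alpha$ tail growth; the smoothed potential $(1+\|x\|^2)^{\alpha/2}$, the step size, and the iteration count are assumptions chosen for demonstration, not the paper's analysis or experimental setup.

```python
import numpy as np

def lmc_sample(grad_f, x0, step_size, n_steps, rng=None):
    """Unadjusted LMC: x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2*eta) * N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Illustrative potential f(x) = (1 + ||x||^2)^(alpha/2), whose tails grow like
# ||x||^alpha for alpha in [1, 2]; the smoothing keeps the gradient defined at 0.
alpha = 1.5
def grad_f(x):
    return alpha * x * (1.0 + np.dot(x, x)) ** (alpha / 2.0 - 1.0)

# Hypothetical dimension, step size, and number of steps for illustration only.
d = 10
sample = lmc_sample(grad_f, x0=np.zeros(d), step_size=1e-2, n_steps=5000)
```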
