We study sampling from a target distribution ${\nu_* = e^{-f}}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like ${\|x\|^\alpha}$ for ${\alpha \in [1,2]}$ and whose gradient is $\beta$-H\"older continuous, we prove that ${\widetilde{\mathcal{O}} \Big(d^{\frac{1}{\beta}+\frac{1+\beta}{\beta}(\frac{2}{\alpha} - \boldsymbol{1}_{\{\alpha \neq 1\}})} \epsilon^{-\frac{1}{\beta}}\Big)}$ steps are sufficient to reach the $\epsilon$-neighborhood of the $d$-dimensional target distribution $\nu_*$ in KL-divergence...
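The abstract analyzes the standard LMC iteration $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$. Below is a minimal Python sketch of that iteration, assuming a user-supplied gradient oracle; the names `lmc_sample`, `grad_f`, and `step_size`, as well as the example potential $f(x) = \|x\|^\alpha$, are illustrative and not taken from the paper.

```python
import numpy as np

def lmc_sample(grad_f, x0, step_size, n_steps, rng=None):
    """Unadjusted Langevin Monte Carlo:
    x_{k+1} = x_k - step_size * grad_f(x_k) + sqrt(2 * step_size) * N(0, I_d).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
        samples[k] = x
    return samples

# Illustrative weakly smooth potential f(x) = ||x||^alpha, alpha in (1, 2];
# the gradient below is valid away from the origin.
alpha = 1.5
grad_f = lambda x: alpha * np.linalg.norm(x) ** (alpha - 2) * x
chain = lmc_sample(grad_f, x0=np.ones(5), step_size=1e-3, n_steps=10_000)
```

The step size and number of iterations above are placeholders; the paper's result concerns how the required number of steps scales with the dimension $d$, the accuracy $\epsilon$, and the tail/smoothness parameters $\alpha$ and $\beta$.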