no code implementations • 26 Nov 2023 • Cédric Gerbelot, Avetik Karagulyan, Stefani Karp, Kavya Ravichandran, Menachem Stern, Nathan Srebro
Although statistical learning theory provides a robust framework for understanding supervised learning, many theoretical aspects of deep learning remain unclear, in particular how different architectures may induce an inductive bias when trained with gradient-based methods.
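As a hedged illustration of this kind of inductive bias (a textbook example, not taken from the notes themselves): in overparameterized linear regression, gradient descent initialized at zero converges to the minimum-$\ell_2$-norm interpolant. The dimensions and step size below are illustrative assumptions.

```python
import numpy as np

# Implicit-bias sketch: for underdetermined least squares (n < d), gradient
# descent started at 0 stays in the row space of X and so converges to the
# minimum-norm interpolant X^T (X X^T)^{-1} y. Sizes here are illustrative.
rng = np.random.default_rng(0)
n, d = 20, 100                          # fewer samples than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)
lr = 1e-3                               # small enough for stability here
for _ in range(20_000):                 # plain GD on 0.5 * ||Xw - y||^2
    w -= lr * X.T @ (X @ w - y)

w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)  # minimum-l2-norm interpolant
print(np.linalg.norm(w - w_min_norm))           # ~0: GD picked the min-norm solution
```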
no code implementations • 14 Jun 2023 • Lu Yu, Avetik Karagulyan, Arnak Dalalyan
To better explain our method for establishing the computable upper bound, we analyze the midpoint discretization of the vanilla Langevin process.
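As a hedged sketch of what one randomized-midpoint step for the vanilla Langevin diffusion $dX_t = -\nabla V(X_t)\,dt + \sqrt{2}\,dW_t$ can look like (a generic scheme in the style of Shen and Lee, not necessarily the exact discretization analyzed in the paper):

```python
import numpy as np

def randomized_midpoint_step(x, grad_V, h, rng):
    """One randomized-midpoint step for dX = -grad V(X) dt + sqrt(2) dW.

    Generic sketch: u ~ Uniform(0, 1) picks a random midpoint time u*h;
    the two Gaussian increments share one Brownian path over [0, h].
    """
    d = x.shape[0]
    u = rng.uniform()
    xi1 = np.sqrt(2 * u * h) * rng.standard_normal(d)        # sqrt(2) * W(uh)
    xi2 = np.sqrt(2 * (1 - u) * h) * rng.standard_normal(d)  # sqrt(2) * (W(h) - W(uh))
    x_mid = x - u * h * grad_V(x) + xi1                      # Euler step to the midpoint
    return x - h * grad_V(x_mid) + xi1 + xi2                 # full step with midpoint gradient
```

For instance, with `grad_V = lambda x: x` (a standard Gaussian target), iterating this step with a small `h` produces approximately Gaussian samples.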
no code implementations • 8 Mar 2023 • Avetik Karagulyan, Peter Richtárik
Federated sampling algorithms have recently gained considerable popularity in the machine learning and statistics communities.
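A minimal, hedged sketch of the federated setting (a generic distributed Langevin step in which each client holds a potential $V_i$ and the server aggregates gradients; the compression and local updates used by practical federated samplers are omitted, and the function names are illustrative):

```python
import numpy as np

def federated_lmc_step(x, client_grads, h, rng):
    """One generic federated Langevin Monte Carlo step (illustrative sketch).

    Target: pi(x) proportional to exp(-sum_i V_i(x)), with V_i held by client i.
    Each client sends grad V_i(x); the server sums them and takes an LMC step.
    """
    g = sum(grad(x) for grad in client_grads)              # aggregate client gradients
    noise = np.sqrt(2 * h) * rng.standard_normal(x.shape)  # Langevin diffusion term
    return x - h * g + noise
```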
no code implementations • 1 Jun 2022 • Lukang Sun, Avetik Karagulyan, Peter Richtárik
Stein Variational Gradient Descent (SVGD) is an important alternative to the Langevin-type algorithms for sampling from probability distributions of the form $\pi(x) \propto \exp(-V(x))$.
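A hedged sketch of the standard SVGD update with an RBF kernel (the generic algorithm of Liu and Wang; the fixed bandwidth below is a simplifying assumption in place of the usual median heuristic):

```python
import numpy as np

def svgd_step(X, grad_log_pi, eps=0.1, bw=1.0):
    """One SVGD update on n particles X of shape (n, d) with an RBF kernel.

    x_i <- x_i + eps * (1/n) * sum_j [ k(x_j, x_i) grad log pi(x_j)
                                       + grad_{x_j} k(x_j, x_i) ]
    """
    diffs = X[:, None, :] - X[None, :, :]                # diffs[i, j] = x_i - x_j
    K = np.exp(-np.sum(diffs**2, -1) / (2 * bw**2))      # RBF kernel matrix
    grads = np.stack([grad_log_pi(x) for x in X])        # scores at the particles
    attraction = K @ grads                               # kernel-weighted score average
    repulsion = np.sum(K[..., None] * diffs, 1) / bw**2  # keeps particles spread out
    return X + eps * (attraction + repulsion) / len(X)
```

For the target above, $\pi(x) \propto \exp(-V(x))$, one would pass `grad_log_pi = lambda x: -grad_V(x)`.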
no code implementations • NeurIPS 2020 • Avetik Karagulyan, Arnak S. Dalalyan
An upper bound on the Wasserstein-$2$ distance between the distribution of the penalized Langevin dynamics (PLD) at time $t$ and the target is established.
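A hedged sketch of a discretized penalized Langevin step, where the potential is augmented with a vanishing quadratic penalty (the schedule $\alpha(t)$ below is an illustrative assumption, not the paper's specific choice):

```python
import numpy as np

def pld_euler(x0, grad_V, h, n_steps, rng, alpha0=1.0):
    """Euler discretization of penalized Langevin dynamics (illustrative sketch).

    dX_t = -(grad V(X_t) + alpha(t) X_t) dt + sqrt(2) dW_t,
    with a vanishing penalty alpha(t) = alpha0 / (1 + t) -- an assumed
    schedule for illustration only.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_steps):
        alpha = alpha0 / (1.0 + k * h)                   # penalty vanishes over time
        drift = grad_V(x) + alpha * x
        x = x - h * drift + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    return x
```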
no code implementations • 20 Jun 2019 • Arnak S. Dalalyan, Avetik Karagulyan, Lionel Riou-Durand
The sampling error is measured in Wasserstein-$q$ distances.
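For reference, the Wasserstein-$q$ distance between probability measures $\mu$ and $\nu$ on $\mathbb{R}^d$ is

$$
W_q(\mu, \nu) = \Big( \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x - y\|^q \, d\gamma(x, y) \Big)^{1/q},
$$

where $\Gamma(\mu, \nu)$ denotes the set of couplings of $\mu$ and $\nu$, i.e., joint distributions with marginals $\mu$ and $\nu$.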