Search Results for author: Avetik Karagulyan

Found 6 papers, 0 papers with code

Applying statistical learning theory to deep learning

no code implementations · 26 Nov 2023 · Cédric Gerbelot, Avetik Karagulyan, Stefani Karp, Kavya Ravichandran, Menachem Stern, Nathan Srebro

Although statistical learning theory provides a robust framework for understanding supervised learning, many theoretical aspects of deep learning remain unclear, in particular how different architectures can induce inductive bias when trained with gradient-based methods.

Tasks: Inductive Bias, Learning Theory, +1
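As a toy illustration of this kind of inductive bias (my example, not from the paper): gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum-norm interpolating solution even though infinitely many interpolants exist.

```python
# Minimal sketch: gradient descent on an underdetermined least-squares
# problem, started at zero, converges to the minimum-norm interpolating
# solution -- a simple instance of the inductive bias of a gradient-based
# method. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))    # 5 equations, 20 unknowns
b = rng.standard_normal(5)

x = np.zeros(20)                    # zero initialization is essential
lr = 0.01
for _ in range(20000):
    x -= lr * A.T @ (A @ x - b)     # gradient of 0.5 * ||Ax - b||^2

x_min_norm = np.linalg.pinv(A) @ b  # minimum-norm solution via pseudoinverse
print(np.allclose(x, x_min_norm, atol=1e-6))  # True: GD picked the min-norm interpolant
```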

Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited

no code implementations · 14 Jun 2023 · Lu Yu, Avetik Karagulyan, Arnak Dalalyan

To provide a more thorough explanation of our method for establishing computable upper bounds, we analyze the midpoint discretization of the vanilla Langevin process.
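For orientation, here is a minimal sketch of the randomized midpoint discretization of the Langevin diffusion $dX_t = -\nabla V(X_t)\,dt + \sqrt{2}\,dW_t$; the step size, target, and exact form below are illustrative assumptions, not the paper's scheme.

```python
# A minimal sketch (my own simplification, not the paper's exact scheme)
# of the randomized midpoint discretization of the Langevin diffusion
# dX_t = -grad V(X_t) dt + sqrt(2) dW_t for a strongly log-concave target.
import numpy as np

def randomized_midpoint_lmc(grad_V, x0, h, n_steps, rng):
    x = np.array(x0, dtype=float)
    d = x.size
    for _ in range(n_steps):
        u = rng.uniform()                        # random midpoint in (0, 1)
        # Brownian increments on [0, u*h] and [u*h, h] (shared first piece)
        w1 = rng.standard_normal(d) * np.sqrt(u * h)
        w2 = rng.standard_normal(d) * np.sqrt((1.0 - u) * h)
        # Euler prediction at the random time u*h
        x_mid = x - u * h * grad_V(x) + np.sqrt(2.0) * w1
        # Full step, with the drift evaluated at the random midpoint
        x = x - h * grad_V(x_mid) + np.sqrt(2.0) * (w1 + w2)
    return x

# Example: sample from a 2-D standard Gaussian, where V(x) = ||x||^2 / 2.
rng = np.random.default_rng(1)
samples = np.array([randomized_midpoint_lmc(lambda x: x, np.zeros(2), 0.1, 500, rng)
                    for _ in range(1000)])
print(samples.std(axis=0))  # close to (1, 1)
```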

ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression

no code implementations · 8 Mar 2023 · Avetik Karagulyan, Peter Richtárik

Federated sampling algorithms have recently gained popularity in the machine learning and statistics communities.
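As a rough sketch of the primal-compression direction (my reading of the general recipe, not the authors' ELF algorithm): each client compresses its local gradient with an unbiased rand-k sparsifier before the server forms the averaged Langevin update.

```python
# A hedged sketch of one round of a federated Langevin algorithm with
# worker-to-server compression. The rand-k compressor and the averaging
# scheme are illustrative assumptions, not the paper's exact method.
import numpy as np

def rand_k(v, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates, rescale."""
    d = v.size
    out = np.zeros(d)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)     # rescaling makes the compressor unbiased
    return out

def federated_langevin_round(x, client_grads, h, k, rng):
    # Each client sends a compressed gradient of its local potential V_i.
    msgs = [rand_k(g(x), k, rng) for g in client_grads]
    g_avg = np.mean(msgs, axis=0)   # server aggregates the compressed messages
    noise = rng.standard_normal(x.size) * np.sqrt(2.0 * h)
    return x - h * g_avg + noise    # Langevin step for the averaged potential
```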

Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition

no code implementations · 1 Jun 2022 · Lukang Sun, Avetik Karagulyan, Peter Richtárik

Stein Variational Gradient Descent (SVGD) is an important alternative to the Langevin-type algorithms for sampling from probability distributions of the form $\pi(x) \propto \exp(-V(x))$.

Tasks: LEMMA
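For reference, a minimal SVGD step in its standard form (the RBF kernel and bandwidth below are illustrative choices; nothing here encodes the paper's weaker smoothness condition):

```python
# A minimal SVGD sketch targeting pi(x) ∝ exp(-V(x)), with an RBF kernel
# of bandwidth sigma. Kernel choice and step size are assumptions.
import numpy as np

def svgd_step(X, grad_V, eps, sigma):
    """One SVGD update for a particle array X of shape (n, d)."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]           # x_j - x_i, shape (n, n, d)
    sq = np.sum(diff ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))           # kernel matrix k(x_j, x_i)
    grads = np.array([grad_V(x) for x in X])       # grad V at each particle
    # phi(x_i) = mean_j [ -k(x_j, x_i) grad V(x_j) + grad_{x_j} k(x_j, x_i) ]
    drift = -K @ grads                             # attraction toward high density
    repulsion = -np.sum(K[..., None] * diff, axis=0) / sigma ** 2
    return X + eps * (drift + repulsion) / n

# Example: 100 particles targeting a 2-D standard Gaussian (V(x) = ||x||^2/2).
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2)) * 3.0
for _ in range(200):
    X = svgd_step(X, lambda x: x, eps=0.1, sigma=1.0)
```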

Penalized Langevin dynamics with vanishing penalty for smooth and log-concave targets

no code implementations · NeurIPS 2020 · Avetik Karagulyan, Arnak S. Dalalyan

An upper bound on the Wasserstein-2 distance between the distribution of the penalized Langevin dynamics (PLD) at time $t$ and the target is established.
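A hedged Euler-Maruyama sketch of PLD: a quadratic penalty $\lambda(t)\,\|x\|^2/2$ makes the effective potential strongly convex, and $\lambda(t)$ vanishes over time. The schedule $\lambda(t) = c/(1+t)$ below is an illustrative assumption, not the paper's prescribed choice.

```python
# Penalized Langevin dynamics, discretized with Euler-Maruyama. The
# penalty schedule lam(t) = c / (1 + t) is an illustrative assumption.
import numpy as np

def pld(grad_V, x0, h, n_steps, c, rng):
    x = np.array(x0, dtype=float)
    t = 0.0
    for _ in range(n_steps):
        lam = c / (1.0 + t)                      # vanishing penalty weight
        drift = grad_V(x) + lam * x              # gradient of V plus penalty term
        x = x - h * drift + np.sqrt(2.0 * h) * rng.standard_normal(x.size)
        t += h
    return x
```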
