Search Results for author: Yuansi Chen

Found 12 papers, 5 papers with code

Prominent Roles of Conditionally Invariant Components in Domain Adaptation: Theory and Algorithms

no code implementations • 19 Sep 2023 • Keru Wu, Yuansi Chen, Wooseok Ha, Bin Yu

Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to evaluate the model.

Domain Adaptation
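To make the covariate-shift setting concrete, here is a minimal sketch of one classical DA strategy, importance weighting, on synthetic one-dimensional data; the Gaussian marginals and labeling rule below are illustrative assumptions, not the setup analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariate shift: source and target share the same
# labeling rule but differ in the marginal distribution of x.
n = 2000
x_src = rng.normal(loc=0.0, scale=1.0, size=n)   # source covariates
x_tgt = rng.normal(loc=1.0, scale=1.0, size=n)   # shifted target covariates
y_src = (x_src > 0.5).astype(float)              # shared labeling rule

# Importance weights w(x) = p_tgt(x) / p_src(x), available in closed
# form here because both marginals are known unit-variance Gaussians.
def log_gauss(x, mu):
    return -0.5 * (x - mu) ** 2

w = np.exp(log_gauss(x_src, 1.0) - log_gauss(x_src, 0.0))

# The weighted source error of a fixed classifier estimates its target error.
pred_src = (x_src > 0.0).astype(float)
weighted_src_err = np.average(pred_src != y_src, weights=w)
tgt_err = np.mean(((x_tgt > 0.0) != (x_tgt > 0.5)).astype(float))
print(weighted_src_err, tgt_err)  # the two estimates should be close
```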

When does Metropolized Hamiltonian Monte Carlo provably outperform Metropolis-adjusted Langevin algorithm?

no code implementations • 10 Apr 2023 • Yuansi Chen, Khashayar Gatmiry

We analyze the mixing time of Metropolized Hamiltonian Monte Carlo (HMC) with the leapfrog integrator to sample from a distribution on $\mathbb{R}^d$ whose log-density is smooth, has Lipschitz Hessian in Frobenius norm and satisfies isoperimetry.
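As an illustration of the sampler being analyzed, here is a minimal sketch of one Metropolized HMC transition with the leapfrog integrator; the standard-Gaussian target and the step-size/path-length choices are placeholder assumptions, not the tuned values from the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_density(x):
    return -0.5 * np.sum(x ** 2)          # placeholder: standard Gaussian

def grad_log_density(x):
    return -x

def hmc_step(x, step_size=0.1, n_leapfrog=10):
    """One Metropolized HMC transition: leapfrog proposal + MH correction."""
    p = rng.standard_normal(x.shape)      # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration: half kick, alternating drifts/kicks, half kick.
    p_new += 0.5 * step_size * grad_log_density(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_density(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_density(x_new)
    # Metropolis accept-reject on the joint (position, momentum) energy.
    log_accept = (log_density(x_new) - 0.5 * np.sum(p_new ** 2)
                  - log_density(x) + 0.5 * np.sum(p ** 2))
    if np.log(rng.uniform()) < log_accept:
        return x_new
    return x

x = np.zeros(10)
for _ in range(1000):
    x = hmc_step(x)
```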

A Simple Proof of the Mixing of Metropolis-Adjusted Langevin Algorithm under Smoothness and Isoperimetry

no code implementations • 8 Apr 2023 • Yuansi Chen, Khashayar Gatmiry

We study the mixing time of Metropolis-Adjusted Langevin algorithm (MALA) for sampling a target density on $\mathbb{R}^d$.
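For reference, a minimal MALA transition looks as follows; the Gaussian target density and the step size are placeholder assumptions, and this sketch omits the tuning that the mixing-time analysis requires.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_pi(x):
    return -0.5 * np.sum(x ** 2)          # placeholder smooth target

def grad_log_pi(x):
    return -x

def mala_step(x, h=0.1):
    """One MALA transition: Langevin proposal + Metropolis correction."""
    mean_fwd = x + h * grad_log_pi(x)
    y = mean_fwd + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    # Log proposal densities q(y|x) and q(x|y) enter the MH ratio
    # because the Langevin proposal is not symmetric.
    mean_bwd = y + h * grad_log_pi(y)
    log_q_fwd = -np.sum((y - mean_fwd) ** 2) / (4 * h)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4 * h)
    log_accept = log_pi(y) + log_q_bwd - log_pi(x) - log_q_fwd
    if np.log(rng.uniform()) < log_accept:
        return y
    return x
```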

Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling

no code implementations • 27 Sep 2021 • Keru Wu, Scott Schmidler, Yuansi Chen

First, for a $d$-dimensional log-concave density with condition number $\kappa$, we show that MALA with a warm start mixes in $\tilde O(\kappa \sqrt{d})$ iterations.

Domain adaptation under structural causal models

1 code implementation • 29 Oct 2020 • Yuansi Chen, Peter Bühlmann

Domain adaptation (DA) arises as an important problem in statistical machine learning when the source data used to train a model is different from the target data used to test the model.

Domain Adaptation

Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients

1 code implementation • 29 May 2019 • Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu

This bound gives a precise quantification of the faster convergence of Metropolized HMC relative to simpler MCMC algorithms such as the Metropolized random walk or the Metropolized Langevin algorithm.
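The Metropolized random walk mentioned here is the gradient-free baseline; a minimal sketch, with the same placeholder Gaussian target as the HMC sketch above:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_pi(x):
    return -0.5 * np.sum(x ** 2)          # placeholder target, as above

def mrw_step(x, h=0.1):
    """Metropolized random walk: Gaussian proposal, no gradient information."""
    y = x + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    # The proposal is symmetric, so the MH ratio reduces to a density ratio.
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        return y
    return x
```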

Sampling Can Be Faster Than Optimization

no code implementations • 20 Nov 2018 • Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, Michael I. Jordan

Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years.

Stability and Convergence Trade-off of Iterative Optimization Algorithms

no code implementations • 4 Apr 2018 • Yuansi Chen, Chi Jin, Bin Yu

Applying existing stability upper bounds for the gradient methods in our trade-off framework, we obtain lower bounds matching the well-established convergence upper bounds up to constants for these algorithms, and we conjecture similar lower bounds for Nesterov's accelerated gradient (NAG) and heavy-ball (HB) methods.

Log-concave sampling: Metropolis-Hastings algorithms are fast

1 code implementation • 8 Jan 2018 • Raaz Dwivedi, Yuansi Chen, Martin J. Wainwright, Bin Yu

Relative to known guarantees for the unadjusted Langevin algorithm (ULA), our bounds show that the use of an accept-reject step in MALA leads to an exponentially improved dependence on the error-tolerance.
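For contrast with the MALA sketch above, a minimal ULA step simply omits the accept-reject correction, which is the source of ULA's discretization bias; the gradient below again assumes a placeholder Gaussian target.

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_log_pi(x):
    return -x                              # placeholder: standard Gaussian

def ula_step(x, h=0.1):
    """Unadjusted Langevin: a discretized Langevin move with no MH step,
    so the chain targets only an approximation of the desired density."""
    return x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
```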

Fast MCMC sampling algorithms on polytopes

2 code implementations • 23 Oct 2017 • Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu

We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope.
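The Vaidya and John walks adapt their proposal geometry via barrier functions; as a hedged illustration of the basic problem only (uniform sampling from $\{x : Ax \le b\}$), here is the much simpler hit-and-run walk, which is not the algorithm proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def hit_and_run_step(x, A, b):
    """One hit-and-run step for the uniform distribution on {x : Ax <= b}.
    A far simpler walk than the Vaidya/John walks analyzed in the paper."""
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)                 # uniform random direction
    # The chord {x + t d} stays feasible while A(x + t d) <= b.
    s = A @ d
    t = (b - A @ x) / np.where(s == 0, 1e-12, s)  # guard division by zero
    t_max = t[s > 0].min()                 # farthest move along +d
    t_min = t[s < 0].max()                 # farthest move along -d
    return x + rng.uniform(t_min, t_max) * d

# Example: uniform samples from the box [-1, 1]^2.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
x = np.zeros(2)
for _ in range(1000):
    x = hit_and_run_step(x, A, b)
```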

Self-calibrating Neural Networks for Dimensionality Reduction

no code implementations • 11 Dec 2016 • Yuansi Chen, Cengiz Pehlevan, Dmitri B. Chklovskii

Here we propose online algorithms in which the threshold self-calibrates based on the singular values computed from the observations seen so far.

Dimensionality Reduction
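As a rough illustration of the calibration idea only (a batch computation, not the paper's online neural algorithm), one can set the cut-off from the observed singular values themselves; the synthetic data and the median-based rule below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data: 3 strong directions plus isotropic noise in 20 dims.
n, d, k_true = 500, 20, 3
signal = rng.standard_normal((n, k_true)) @ rng.standard_normal((k_true, d))
X = signal + 0.1 * rng.standard_normal((n, d))

# Calibrate the cut-off from the observed singular values themselves:
# keep components whose singular value clearly exceeds the noise bulk.
s = np.linalg.svd(X, compute_uv=False)
threshold = 2.0 * np.median(s)            # illustrative self-calibrated rule
k = int(np.sum(s > threshold))
print(k)  # recovers roughly k_true on this synthetic example
```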

Fast and Robust Archetypal Analysis for Representation Learning

1 code implementation • CVPR 2014 • Yuansi Chen, Julien Mairal, Zaid Harchaoui

We revisit a pioneering unsupervised learning technique called archetypal analysis, which is related to successful data analysis methods such as sparse coding and non-negative matrix factorization.

General Classification • Representation Learning
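Archetypal analysis represents each data point as a convex combination of archetypes that are themselves convex combinations of data points. A short alternating projected-gradient sketch of that formulation follows; the fixed step size and random initialization are illustrative choices, not the authors' optimized solver (the linked code provides that).

```python
import numpy as np

rng = np.random.default_rng(7)

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex."""
    u = np.sort(v, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1) - 1.0
    ind = np.arange(1, v.shape[1] + 1)
    rho = (u - css / ind > 0).sum(axis=1)
    theta = css[np.arange(v.shape[0]), rho - 1] / rho
    return np.maximum(v - theta[:, None], 0.0)

def archetypal_analysis(X, k, n_iter=200, lr=1e-3):
    """Alternating projected gradient for X ~ H @ (W @ X): rows of W mix
    data points into k archetypes, rows of H mix archetypes back into
    reconstructions of the data; both live on the simplex."""
    n = X.shape[0]
    W = project_simplex(rng.random((k, n)))
    H = project_simplex(rng.random((n, k)))
    for _ in range(n_iter):
        Z = W @ X                          # current archetypes
        R = H @ Z - X                      # reconstruction residual
        H = project_simplex(H - lr * R @ Z.T)
        W = project_simplex(W - lr * H.T @ R @ X.T)
    return W, H

X = rng.standard_normal((100, 5))
W, H = archetypal_analysis(X, k=4)
```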
