no code implementations • 19 Sep 2023 • Keru Wu, Yuansi Chen, Wooseok Ha, Bin Yu
Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to evaluate the model.
no code implementations • 10 Apr 2023 • Yuansi Chen, Khashayar Gatmiry
We analyze the mixing time of Metropolized Hamiltonian Monte Carlo (HMC) with the leapfrog integrator to sample from a distribution on $\mathbb{R}^d$ whose log-density is smooth, has a Lipschitz Hessian in the Frobenius norm, and satisfies an isoperimetric inequality.
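As a reference point, here is a minimal sketch of one Metropolized HMC step with a leapfrog integrator, assuming access to the target's log-density `log_p` and its gradient `grad_log_p` (the function names and signature are illustrative, not from the paper):

```python
import numpy as np

def hmc_step(x, log_p, grad_log_p, step_size, n_leapfrog, rng):
    """One Metropolized HMC step: leapfrog proposal + accept-reject."""
    p = rng.standard_normal(x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step_size * grad_log_p(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_p(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_p(x_new)
    # Metropolis correction removes the discretization bias
    log_alpha = (log_p(x_new) - 0.5 * p_new @ p_new) - (log_p(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_alpha else x
```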
no code implementations • 8 Apr 2023 • Yuansi Chen, Khashayar Gatmiry
We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for sampling from a target density on $\mathbb{R}^d$.
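For concreteness, a minimal sketch of one MALA step, again assuming a log-density `log_p` with gradient `grad_log_p` (names illustrative): the proposal is a discretized Langevin step, corrected by a Metropolis accept-reject test.

```python
import numpy as np

def mala_step(x, log_p, grad_log_p, h, rng):
    """One MALA step with step size h."""
    mean_fwd = x + h * grad_log_p(x)
    y = mean_fwd + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    mean_bwd = y + h * grad_log_p(y)
    # log q(x | y) - log q(y | x) for the Gaussian Langevin proposal
    log_q_ratio = (np.sum((y - mean_fwd) ** 2) - np.sum((x - mean_bwd) ** 2)) / (4 * h)
    log_alpha = log_p(y) - log_p(x) + log_q_ratio
    return y if np.log(rng.uniform()) < log_alpha else x
```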
no code implementations • 27 Sep 2021 • Keru Wu, Scott Schmidler, Yuansi Chen
First, for a $d$-dimensional log-concave density with condition number $\kappa$, we show that MALA with a warm start mixes in $\tilde O(\kappa \sqrt{d})$ iterations, where $\tilde O$ hides logarithmic factors.
1 code implementation • 29 Oct 2020 • Yuansi Chen, Peter Bühlmann
Domain adaptation (DA) arises as an important problem in statistical machine learning when the source data used to train a model differs from the target data used to test the model.
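As a generic illustration of the DA setting (not the method developed in this paper), one classical baseline reweights source examples by an estimated density ratio between target and source covariates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_source, X_target):
    """Estimate w(x) = p_target(x) / p_source(x) by training a classifier
    to distinguish source from target samples (density-ratio trick)."""
    X = np.vstack([X_source, X_target])
    z = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression().fit(X, z)
    p = clf.predict_proba(X_source)[:, 1]    # P(sample came from target pool)
    ratio = len(X_source) / len(X_target)    # correct for pool-size imbalance
    return ratio * p / (1 - p)
```

The resulting weights can then be passed to any learner that accepts per-sample weights when fitting on the source data.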
1 code implementation • 29 May 2019 • Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu
This bound precisely quantifies the faster convergence of Metropolized HMC relative to simpler MCMC algorithms such as the Metropolized random walk or the Metropolized Langevin algorithm.
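For contrast, the Metropolized random walk mentioned above is the simplest of these algorithms: its proposal uses no gradient information, which is part of why it mixes more slowly. A minimal sketch:

```python
import numpy as np

def mrw_step(x, log_p, h, rng):
    """One Metropolized random walk step: isotropic Gaussian proposal."""
    y = x + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    # The proposal is symmetric, so the ratio reduces to the density ratio
    return y if np.log(rng.uniform()) < log_p(y) - log_p(x) else x
```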
no code implementations • 20 Nov 2018 • Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, Michael I. Jordan
Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years.
no code implementations • 4 Apr 2018 • Yuansi Chen, Chi Jin, Bin Yu
Applying existing stability upper bounds for gradient methods within our trade-off framework, we obtain lower bounds that match the well-established convergence upper bounds for these algorithms up to constants, and we conjecture similar lower bounds for Nesterov's accelerated gradient (NAG) and the heavy-ball method (HB).
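For reference, minimal sketches of the two momentum update rules in question, with generic step size `lr` and momentum parameter `beta` (these are hyperparameters of the sketch, not values from the paper):

```python
def heavy_ball_step(x, x_prev, grad, lr, beta):
    """Heavy ball (HB): gradient step plus momentum from the previous iterate."""
    return x - lr * grad(x) + beta * (x - x_prev)

def nag_step(x, x_prev, grad, lr, beta):
    """Nesterov accelerated gradient (NAG): the gradient is evaluated
    at the extrapolated (look-ahead) point rather than the current one."""
    y = x + beta * (x - x_prev)
    return y - lr * grad(y)
```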
1 code implementation • 8 Jan 2018 • Raaz Dwivedi, Yuansi Chen, Martin J. Wainwright, Bin Yu
Relative to known guarantees for the unadjusted Langevin algorithm (ULA), our bounds show that the accept-reject step in MALA leads to an exponentially improved dependence on the error tolerance.
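For comparison, ULA is exactly the MALA proposal without the accept-reject correction; the resulting discretization bias is what drives its worse dependence on the error tolerance. A one-step sketch:

```python
import numpy as np

def ula_step(x, grad_log_p, h, rng):
    """One ULA step: a forward Euler discretization of Langevin diffusion.
    No Metropolis correction, so the chain has a step-size-dependent bias."""
    return x + h * grad_log_p(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
```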
2 code implementations • 23 Oct 2017 • Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu
We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope.
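Both walks follow the template of the earlier Dikin walk, sketched below for a polytope {x : Ax <= b}; the Vaidya and John walks replace the log-barrier Hessian used here with the Vaidya volumetric barrier and an approximate John-ellipsoid barrier, respectively, to improve the dimension dependence. This sketch is the simpler relative, not the paper's algorithms.

```python
import numpy as np

def dikin_walk_step(x, A, b, r, rng):
    """One Dikin walk step for uniform sampling over {x : A x <= b}."""
    d = x.size

    def hessian(u):
        s = b - A @ u                        # slacks, positive inside the polytope
        return A.T @ (A / s[:, None] ** 2)   # log-barrier Hessian

    def log_proposal(u, v):
        """log q(v | u) up to constants, for q = N(u, (r^2/d) H(u)^{-1})."""
        H = hessian(u)
        _, logdet = np.linalg.slogdet(H)
        diff = v - u
        return 0.5 * logdet - (d / (2 * r ** 2)) * diff @ H @ diff

    L = np.linalg.cholesky(hessian(x))
    z = x + (r / np.sqrt(d)) * np.linalg.solve(L.T, rng.standard_normal(d))
    if np.any(A @ z >= b):                   # proposal left the polytope
        return x
    # Metropolis correction for the state-dependent proposal covariance
    log_alpha = log_proposal(z, x) - log_proposal(x, z)
    return z if np.log(rng.uniform()) < log_alpha else x
```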
no code implementations • 11 Dec 2016 • Yuansi Chen, Cengiz Pehlevan, Dmitri B. Chklovskii
Here we propose online algorithms in which the threshold self-calibrates based on the singular values computed from the observations seen so far.
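A minimal batch sketch of the self-calibration idea: the threshold is set from the observed singular values rather than fixed a priori (the median-based rule and its multiplier below are illustrative stand-ins, not the calibration derived in the paper).

```python
import numpy as np

def svd_hard_threshold(Y):
    """Denoise Y by hard-thresholding its singular values at a level
    calibrated from the data itself."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    tau = 2.0 * np.median(s)                 # assumed self-calibrated threshold
    s_kept = np.where(s > tau, s, 0.0)       # keep only strong components
    return U @ np.diag(s_kept) @ Vt
```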
1 code implementation • CVPR 2014 • Yuansi Chen, Julien Mairal, Zaid Harchaoui
We revisit a pioneering unsupervised learning technique called archetypal analysis, which is related to successful data analysis methods such as sparse coding and non-negative matrix factorization.
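In archetypal analysis, each data point is approximated by a convex combination of archetypes, and each archetype is itself a convex combination of data points: X ≈ X B A, with the columns of both A and B constrained to the probability simplex. A minimal sketch via alternating projected gradient steps (the fixed step size and solver are illustrative, not the optimization scheme of the paper):

```python
import numpy as np

def project_simplex(V):
    """Project each column of V onto the probability simplex."""
    U = np.sort(V, axis=0)[::-1]                        # sort descending
    css = np.cumsum(U, axis=0) - 1.0
    idx = np.arange(1, V.shape[0] + 1)[:, None]
    rho = (U - css / idx > 0).cumsum(axis=0).argmax(axis=0)
    theta = css[rho, np.arange(V.shape[1])] / (rho + 1)
    return np.maximum(V - theta, 0.0)

def archetypal_analysis(X, k, n_iter=200, lr=1e-3, seed=0):
    """Fit X ~ (X B) A, columns of A (k x n) and B (n x k) on the simplex."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    A = project_simplex(rng.random((k, n)))
    B = project_simplex(rng.random((n, k)))
    for _ in range(n_iter):
        Z = X @ B                                        # current archetypes
        A = project_simplex(A - lr * (Z.T @ (Z @ A - X)))    # grad step in A
        R = Z @ A - X                                    # residual with new A
        B = project_simplex(B - lr * (X.T @ (R @ A.T)))      # grad step in B
    return X @ B, A
```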