no code implementations • ICML 2020 • Matthew Hoffman, Yi-An Ma
Variational inference (VI) and Markov chain Monte Carlo (MCMC) are approximate posterior inference algorithms that are often said to have complementary strengths, with VI being fast but biased and MCMC being slower but asymptotically unbiased.
no code implementations • 29 Feb 2024 • Ruijia Niu, Dongxia Wu, Kai Kim, Yi-An Ma, Duncan Watson-Parris, Rose Yu
Multi-fidelity surrogate modeling aims to learn an accurate surrogate at the highest fidelity level by combining data from multiple sources.
no code implementations • 28 Feb 2024 • Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortoli, Haorui Wang, Dongxia Wu, Aaron Ferber, Yi-An Ma, Carla P. Gomes, Chao Zhang
To constrain the optimization process to the data manifold, we reformulate the original optimization problem as a sampling problem from the product of the Boltzmann distribution defined by the objective function and the data distribution learned by the diffusion model.
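As a toy illustration of that reformulation (not the paper's algorithm), here is a minimal sketch: an unadjusted Langevin sampler targeting the product density pi(x) proportional to exp(-lam * f(x)) * p_data(x). The objective f and the data score grad_log_p_data are hypothetical stand-ins (a quadratic and a standard-Gaussian score) in place of a trained diffusion model.

```python
import numpy as np

def f(x):                      # hypothetical smooth objective to minimize
    return np.sum((x - 2.0) ** 2)

def grad_f(x):
    return 2.0 * (x - 2.0)

def grad_log_p_data(x):        # stand-in for a diffusion model's score
    return -x                  # score of a standard Gaussian

def product_langevin(x0, lam=1.0, step=1e-2, n_steps=1000, seed=0):
    """Unadjusted Langevin on log pi(x) = -lam * f(x) + log p_data(x)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        grad_log_pi = -lam * grad_f(x) + grad_log_p_data(x)
        x = x + step * grad_log_pi + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

print(product_langevin(np.zeros(2)))   # settles between the optimum and the data mode
```

The temperature lam trades off following the objective against staying on the (here trivial) data distribution.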
no code implementations • 6 Feb 2024 • Dongxia Wu, Tsuyoshi Idé, Aurélie Lozano, Georgios Kollias, Jiří Navrátil, Naoki Abe, Yi-An Ma, Rose Yu
In particular, we are interested in discovering instance-level causal structures in an unsupervised manner.
no code implementations • 10 Oct 2023 • Sumanth Varambally, Yi-An Ma, Rose Yu
In this work, we relax this assumption and perform causal discovery from time series data originating from a mixture of causal models.
no code implementations • 4 Aug 2023 • Abhishek Roy, Geelon So, Yi-An Ma
Since the set of Pareto optimal vectors can be very large, we further consider the more practically significant Pareto-constrained optimization problem, where the goal is to optimize a preference function constrained to the Pareto set.
no code implementations • 5 Jul 2023 • Xunpeng Huang, Hanze Dong, Yifan Hao, Yi-An Ma, Tong Zhang
We propose a Monte Carlo sampler based on the reverse diffusion process.
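For context, a self-contained sketch of sampling from a reverse diffusion process (the generic reverse-time VP-SDE with Euler-Maruyama steps, not the paper's specific sampler). The constant schedule BETA, the toy Gaussian data N(MU, I), and the analytically known score are all assumptions of this sketch.

```python
import numpy as np

BETA = 4.0                     # constant noise schedule (assumption)
MU = np.array([3.0, -1.0])     # toy data distribution: N(MU, I)

def score(x, t):
    """Exact score of the VP forward-diffusion marginal for N(MU, I) data:
    the marginal at time t is N(MU * alpha(t), I) with alpha = exp(-BETA*t/2)."""
    alpha = np.exp(-0.5 * BETA * t)
    return -(x - MU * alpha)

def reverse_diffusion_sample(n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(2)           # initialize from the prior N(0, I)
    dt = 1.0 / n_steps
    for i in range(n_steps):             # integrate time from 1 down to 0
        t = 1.0 - i * dt
        drift = -0.5 * BETA * x - BETA * score(x, t)
        x = x - drift * dt + np.sqrt(BETA * dt) * rng.standard_normal(2)
    return x

print(reverse_diffusion_sample())        # should land near MU
```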
no code implementations • 15 Jun 2023 • Amin Karbasi, Nikki Lijing Kuang, Yi-An Ma, Siddharth Mitra
Thompson sampling (TS) is widely used in sequential decision making due to its ease of use and appealing empirical performance.
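As a reminder of the basic scheme (the textbook Beta-Bernoulli version, not any variant specific to this paper), a minimal sketch with illustrative arm means:

```python
import numpy as np

def thompson_bernoulli(true_means, horizon=5000, seed=0):
    """Beta-Bernoulli Thompson sampling on a K-armed bandit."""
    rng = np.random.default_rng(seed)
    K = len(true_means)
    alpha = np.ones(K)                     # Beta(1, 1) priors
    beta = np.ones(K)
    total_reward = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)      # one posterior sample per arm
        arm = int(np.argmax(theta))        # play the apparently best arm
        reward = rng.random() < true_means[arm]
        alpha[arm] += reward               # conjugate posterior update
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

print(thompson_bernoulli([0.3, 0.5, 0.7]))   # should approach 0.7 * horizon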
no code implementations • NeurIPS 2023 • Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi-An Ma, Jacob R. Gardner
We provide the first convergence guarantee for full black-box variational inference (BBVI), also known as Monte Carlo variational inference.
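For context on what BBVI computes, a minimal sketch of reparameterization-gradient BBVI with a mean-field Gaussian family; log_target and all hyperparameters are hypothetical stand-ins, and the sketch makes no claim about the conditions under which the paper's guarantee holds.

```python
import numpy as np

def grad_log_target(z):
    """Gradient of a hypothetical unnormalized log posterior: standard Gaussian."""
    return -z

def bbvi_gaussian(dim=2, n_iters=2000, lr=1e-2, seed=0):
    """Single-sample reparameterization-gradient BBVI, mean-field Gaussian q."""
    rng = np.random.default_rng(seed)
    mu = np.ones(dim)            # variational means
    log_sigma = np.zeros(dim)    # log of variational std devs
    for _ in range(n_iters):
        eps = rng.standard_normal(dim)
        sigma = np.exp(log_sigma)
        z = mu + sigma * eps                       # reparameterized sample
        g = grad_log_target(z)                     # pathwise gradient of E[log p]
        mu += lr * g                               # dELBO/dmu estimate
        log_sigma += lr * (g * eps * sigma + 1.0)  # + entropy term d/dlog_sigma
    return mu, np.exp(log_sigma)

print(bbvi_gaussian())   # should approach (zeros, ones) for this target
```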
no code implementations • 22 Jul 2022 • Kush Bhatia, Nikki Lijing Kuang, Yi-An Ma, Yixin Wang
Focusing on Gaussian inferential models (or variational approximating families) with diagonal plus low-rank precision matrices, we initiate a theoretical study of the trade-offs in two aspects, Bayesian posterior inference error and frequentist uncertainty quantification error.
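A small sketch of the inferential-model parameterization in question: a Gaussian whose precision matrix is Lambda = diag(d) + U U^T, sampled through a Cholesky factor of Lambda. The dimensions and the rank-1 factor U are illustrative.

```python
import numpy as np

def sample_dplr_gaussian(mean, d, U, n_samples=5000, seed=0):
    """Sample N(mean, Lambda^{-1}) with precision Lambda = diag(d) + U U^T."""
    rng = np.random.default_rng(seed)
    dim = mean.shape[0]
    Lam = np.diag(d) + U @ U.T          # diagonal-plus-low-rank precision
    L = np.linalg.cholesky(Lam)         # Lambda = L L^T
    eps = rng.standard_normal((n_samples, dim))
    # If Lambda = L L^T, then x = mean + L^{-T} eps has covariance Lambda^{-1}.
    return mean + np.linalg.solve(L.T, eps.T).T

mean = np.zeros(3)
d = np.array([2.0, 2.0, 2.0])
U = np.array([[1.0], [0.5], [0.0]])     # rank-1 correction
x = sample_dplr_gaussian(mean, d, U)
print(np.cov(x.T))                                  # empirical covariance
print(np.linalg.inv(np.diag(d) + U @ U.T))          # should match it
```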
1 code implementation • 10 Jun 2022 • Dongxia Wu, Matteo Chinazzi, Alessandro Vespignani, Yi-An Ma, Rose Yu
MF-HNP is flexible enough to handle non-nested, high-dimensional data at different fidelity levels with varying input and output dimensions.
no code implementations • 3 Jun 2022 • Yi-An Ma, Teodor Vanislavov Marinov, Tong Zhang
This paper considers the generalization performance of differentially private convex learning.
no code implementations • 20 Feb 2022 • Ruoqi Shen, Liyao Gao, Yi-An Ma
We demonstrate experimentally that our theoretical results on the optimal early stopping time correspond to the training process of deep neural networks.
no code implementations • 9 Dec 2021 • Wei Deng, Qian Zhang, Yi-An Ma, Zhao Song, Guang Lin
We develop theoretical guarantees for federated averaging Langevin dynamics (FA-LD) for strongly log-concave distributions with non-i.i.d. data, and study how the injected noise, the stochastic-gradient noise, the heterogeneity of the data, and varying learning rates affect convergence.
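To fix ideas, a schematic sketch of the FA-LD structure, local Langevin steps interleaved with federated averaging. The client_grads interface, the K-fold scaling of local gradients, and the toy two-client example are assumptions of this sketch, not the paper's exact update rule or noise calibration.

```python
import numpy as np

def fa_ld(client_grads, x0, step=1e-2, n_rounds=100, local_steps=5, seed=0):
    """Federated averaging Langevin dynamics (structural sketch).

    client_grads: list of K functions; each returns the gradient of one
    client's share of the negative log-posterior (assumption: the K shares
    sum to the full gradient).
    """
    rng = np.random.default_rng(seed)
    K = len(client_grads)
    xs = [np.array(x0, dtype=float) for _ in range(K)]
    for _ in range(n_rounds):
        for k in range(K):                       # local Langevin updates
            for _ in range(local_steps):
                noise = rng.standard_normal(xs[k].shape)
                xs[k] = xs[k] - step * K * client_grads[k](xs[k]) \
                        + np.sqrt(2 * step) * noise
        avg = sum(xs) / K                        # communication round: average
        xs = [avg.copy() for _ in range(K)]
    return xs[0]

# Toy: two clients whose gradients sum to (x - 1), i.e. energy (x - 1)^2 / 2.
grads = [lambda x: 0.5 * (x - 1.0), lambda x: 0.5 * (x - 1.0)]
print(fa_ld(grads, np.zeros(1)))   # concentrates near 1
```

The K-fold scaling keeps each local drift an unbiased surrogate for the full gradient; how the averaging frequency and injected noise interact is exactly what the paper analyzes, and this sketch does not attempt to reproduce that calibration.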
no code implementations • 5 Oct 2021 • Yoav Freund, Yi-An Ma, Tong Zhang
There has been a surge of work bridging MCMC sampling and optimization, with a specific focus on translating non-asymptotic convergence guarantees for optimization problems into the analysis of Langevin algorithms in MCMC sampling.
1 code implementation • 5 Jun 2021 • Dongxia Wu, Ruijia Niu, Matteo Chinazzi, Alessandro Vespignani, Yi-An Ma, Rose Yu
We propose Interactive Neural Process (INP), a deep Bayesian active learning framework for learning deep surrogate models to accelerate stochastic simulations.
1 code implementation • 25 May 2021 • Dongxia Wu, Liyao Gao, Xinyue Xiong, Matteo Chinazzi, Alessandro Vespignani, Yi-An Ma, Rose Yu
Deep learning is gaining increasing popularity for spatiotemporal forecasting.
no code implementations • 12 Feb 2021 • Dongxia Wu, Liyao Gao, Xinyue Xiong, Matteo Chinazzi, Alessandro Vespignani, Yi-An Ma, Rose Yu
We introduce DeepGLEAM, a hybrid model for COVID-19 forecasting.
1 code implementation • ICML 2020 • Michael W. Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-An Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, Dustin Tran
Bayesian neural networks (BNNs) demonstrate promising success in improving the robustness and uncertainty quantification of modern deep learning.
no code implementations • ICML 2020 • Eric Mazumdar, Aldo Pacchiano, Yi-An Ma, Peter L. Bartlett, Michael I. Jordan
The resulting approximate Thompson sampling algorithm has logarithmic regret and its computational complexity does not scale with the time horizon of the algorithm.
no code implementations • 28 Aug 2019 • Wenlong Mou, Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan
We propose a Markov chain Monte Carlo (MCMC) algorithm based on third-order Langevin dynamics for sampling from distributions with log-concave and smooth densities.
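The third-order dynamics itself is involved; as a simpler relative, here is a naive Euler discretization of second-order (underdamped) Langevin dynamics, which shows the general shape of such samplers. grad_U and all tuning constants are illustrative, and this is explicitly not the paper's integrator.

```python
import numpy as np

def underdamped_langevin(grad_U, x0, gamma=1.0, step=1e-2, n_steps=5000, seed=0):
    """Naive Euler discretization of second-order (underdamped) Langevin:
        dx = v dt,   dv = (-gamma * v - grad_U(x)) dt + sqrt(2 * gamma) dW.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    samples = []
    for _ in range(n_steps):
        v += -step * (gamma * v + grad_U(x)) \
             + np.sqrt(2.0 * gamma * step) * rng.standard_normal(x.shape)
        x += step * v
        samples.append(x.copy())
    return np.array(samples)

# Target N(0, 1): U(x) = x**2 / 2, so grad_U(x) = x.
samples = underdamped_langevin(lambda x: x, np.zeros(1))[1000:]
print(samples.mean(), samples.var())   # roughly 0 and 1
```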
no code implementations • 27 Jul 2019 • Kush Bhatia, Yi-An Ma, Anca D. Dragan, Peter L. Bartlett, Michael I. Jordan
We study the problem of robustly estimating the posterior distribution for the setting where observed data can be contaminated with potentially adversarial outliers.
no code implementations • 4 Feb 2019 • Yi-An Ma, Niladri Chatterji, Xiang Cheng, Nicolas Flammarion, Peter Bartlett, Michael I. Jordan
We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as optimization on the space of probability measures, with Kullback-Leibler (KL) divergence as the objective functional.
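Concretely, this rests on the standard identification of Langevin dynamics with the Wasserstein gradient flow of the KL objective. In the notation below, pi is the target density and rho_t the law of the iterate:

```latex
% Objective over densities, with target pi:
F(\rho) \;=\; \mathrm{KL}(\rho \,\|\, \pi) \;=\; \int \rho(x)\,\log\frac{\rho(x)}{\pi(x)}\,dx .

% Its Wasserstein gradient flow is the Fokker--Planck equation
\partial_t \rho_t \;=\; \nabla \cdot \Big( \rho_t \, \nabla \log \tfrac{\rho_t}{\pi} \Big),

% whose sample-path counterpart is the Langevin SDE
dX_t \;=\; \nabla \log \pi(X_t)\, dt + \sqrt{2}\, dW_t .
```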
no code implementations • 20 Nov 2018 • Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, Michael I. Jordan
Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years.
1 code implementation • 22 Oct 2018 • Christopher Aicher, Yi-An Ma, Nicholas J. Foti, Emily B. Fox
However, inference in SSMs is often computationally prohibitive for long time series.
no code implementations • 5 Jun 2018 • Xin Wang, Fisher Yu, Lisa Dunlap, Yi-An Ma, Ruth Wang, Azalia Mirhoseini, Trevor Darrell, Joseph E. Gonzalez
Larger networks generally have greater representational power at the cost of increased computational complexity.
no code implementations • ICML 2018 • Niladri S. Chatterji, Nicolas Flammarion, Yi-An Ma, Peter L. Bartlett, Michael I. Jordan
We provide convergence guarantees in Wasserstein distance for a variety of variance-reduction methods: SAGA Langevin diffusion, SVRG Langevin diffusion and control-variate underdamped Langevin diffusion.
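As an illustration of the variance-reduction idea, a sketch of an SVRG-style control-variate gradient estimator inside a Langevin update. The grad_i interface, the epoch structure, and the toy example are assumptions of this sketch rather than the exact algorithms analyzed in the paper.

```python
import numpy as np

def svrg_langevin(grad_i, n_data, x0, step=1e-2, n_epochs=50, batch=10, seed=0):
    """SVRG-style variance-reduced Langevin (sketch).

    grad_i(x, i) returns the gradient of the i-th data term of the
    negative log-posterior U(x) = sum_i U_i(x) (assumption).
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_epochs):
        anchor = x.copy()
        full_grad = sum(grad_i(anchor, i) for i in range(n_data))
        for _ in range(n_data // batch):
            idx = rng.integers(n_data, size=batch)
            # Control variate: minibatch correction around the anchor point.
            g = full_grad + (n_data / batch) * sum(
                grad_i(x, i) - grad_i(anchor, i) for i in idx)
            x = x - step * g + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy: 100 identical terms whose gradients sum to (x - 1), i.e. target N(1, 1).
N = 100
print(svrg_langevin(lambda x, i: (x - 1.0) / N, N, np.zeros(1)))
```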
no code implementations • 17 Oct 2017 • Felix X.-F. Ye, Yi-An Ma, Hong Qian
Inference in hidden Markov models has been challenging to scale due to dependencies in the observation data.
no code implementations • ICML 2017 • Yi-An Ma, Nicholas J. Foti, Emily B. Fox
Stochastic gradient MCMC (SG-MCMC) algorithms have proven useful in scaling Bayesian inference to large datasets under an assumption of i.i.d. data.
no code implementations • NeurIPS 2015 • Yi-An Ma, Tianqi Chen, Emily B. Fox
That is, any continuous Markov process that provides samples from the target distribution can be written in our framework.
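For reference, a sketch of that framework's SDE form as given in the published paper: z is the state, H the potential with target proportional to exp(-H(z)), D(z) a positive semidefinite diffusion matrix, and Q(z) a skew-symmetric curl matrix; any continuous-time sampler for the target corresponds to some choice of D and Q.

```latex
% Complete-recipe SDE:
dz \;=\; -\big[D(z) + Q(z)\big]\,\nabla H(z)\,dt \;+\; \Gamma(z)\,dt \;+\; \sqrt{2\,D(z)}\,dW(t),

% with the correction term
\Gamma_i(z) \;=\; \sum_j \frac{\partial}{\partial z_j}\big(D_{ij}(z) + Q_{ij}(z)\big),

% and stationary distribution proportional to \exp(-H(z)).
```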