1 code implementation • 26 Oct 2023 • Kaiwen Wu, Jonathan Wenger, Haydn Jones, Geoff Pleiss, Jacob R. Gardner
Training and inference in Gaussian processes (GPs) require solving linear systems with $n\times n$ kernel matrices.
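To make the computational bottleneck concrete, here is a minimal sketch (not code from the paper) of why GP inference requires an $n\times n$ linear solve: the posterior mean needs $\alpha = (K + \sigma^2 I)^{-1} y$, which costs $O(n^3)$ with a direct solver and $O(n^2)$ memory for the kernel matrix. The kernel choice and data below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential kernel evaluated pairwise
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

# n x n kernel matrix with noise jitter: O(n^2) storage
K = rbf_kernel(X, X) + 1e-2 * np.eye(50)
# Direct solve costs O(n^3); iterative methods target exactly this step
alpha = np.linalg.solve(K, y)

# Posterior mean at a handful of test points
Xs = np.linspace(-3, 3, 5)[:, None]
mean = rbf_kernel(Xs, X) @ alpha
```

Scaling past a few tens of thousands of points is what motivates replacing the direct solve with iterative or preconditioned methods.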
1 code implementation • NeurIPS 2023 • Kaiwen Wu, Kyurae Kim, Roman Garnett, Jacob R. Gardner
A recent development in Bayesian optimization is the use of local optimization strategies, which can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies.
no code implementations • NeurIPS 2023 • Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi-An Ma, Jacob R. Gardner
We provide the first convergence guarantee for full black-box variational inference (BBVI), also known as Monte Carlo variational inference.
no code implementations • 18 Mar 2023 • Kyurae Kim, Kaiwen Wu, Jisu Oh, Jacob R. Gardner
Understanding the gradient variance of black-box variational inference (BBVI) is a crucial step for establishing its convergence and developing algorithmic improvements.
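For context, a minimal sketch of the kind of Monte Carlo gradient estimator whose variance such analyses study — this is a generic reparameterization (pathwise) BBVI loop fitting $q = \mathcal{N}(\mu, \sigma^2)$ to an unnormalized Gaussian target, not the paper's specific setup; the target, step size, and sample count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(z):
    # d/dz log p(z) for a target p proportional to N(2, 1)
    return -(z - 2.0)

mu, log_sigma = 0.0, 0.0
lr = 0.1
for _ in range(200):
    eps = rng.standard_normal(64)             # Monte Carlo noise samples
    z = mu + np.exp(log_sigma) * eps          # reparameterized draws from q
    g = grad_log_p(z)
    mu += lr * g.mean()                       # pathwise ELBO gradient w.r.t. mu
    # pathwise term w.r.t. log_sigma plus the entropy gradient (= 1 for a Gaussian)
    log_sigma += lr * ((g * eps * np.exp(log_sigma)).mean() + 1.0)
```

The variance of the two Monte Carlo averages above is exactly the quantity that governs how large a step size (and how few samples) such a loop can tolerate.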
1 code implementation • 21 Oct 2022 • Quan Nguyen, Kaiwen Wu, Jacob R. Gardner, Roman Garnett
Local optimization presents a promising approach to expensive, high-dimensional black-box optimization by sidestepping the need to globally explore the search space.
1 code implementation • 20 Oct 2022 • Natalie Maus, Kaiwen Wu, David Eriksson, Jacob Gardner
Bayesian optimization (BO) is a popular approach for sample-efficient optimization of black-box objective functions.
1 code implementation • ICML 2020 • Kaiwen Wu, Allen Houze Wang, Yao-Liang Yu
While most existing attacks measure perturbations under the $\ell_p$ metric, the Wasserstein distance, which accounts for the geometry of pixel space, has long been known as a suitable measure of image quality and has recently emerged as a compelling alternative to the $\ell_p$ metric in adversarial attacks.
1 code implementation • 25 Jun 2020 • Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yao-Liang Yu
We prove their local convergence at strict local minimax points, which are surrogates of global solutions.
no code implementations • 26 Jul 2019 • Kaiwen Wu, Yao-Liang Yu
Deep models, while being extremely versatile and accurate, are vulnerable to adversarial attacks: slight perturbations that are imperceptible to humans can completely flip the prediction of deep models.
no code implementations • 13 May 2019 • Borislav Mavrin, Shangtong Zhang, Hengshuai Yao, Linglong Kong, Kaiwen Wu, Yao-Liang Yu
In distributional reinforcement learning (RL), the estimated distribution of the value function models both parametric and intrinsic uncertainty.