Search Results for author: Kaiwen Wu

Found 10 papers, 6 papers with code

Large-Scale Gaussian Processes via Alternating Projection

1 code implementation · 26 Oct 2023 · Kaiwen Wu, Jonathan Wenger, Haydn Jones, Geoff Pleiss, Jacob R. Gardner

Training and inference in Gaussian processes (GPs) require solving linear systems with $n\times n$ kernel matrices.
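The paper's alternating-projection solver is not reproduced here, but the bottleneck it targets can be sketched with the standard conjugate-gradient baseline for an $n\times n$ kernel system $K\alpha = y$; the RBF kernel, jitter value, and toy data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    # Squared-exponential kernel matrix for rows of X.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def conjugate_gradients(K, y, tol=1e-8, max_iter=1000):
    # Solve K x = y for symmetric positive-definite K using only
    # matrix-vector products, the workhorse of iterative GP inference.
    x = np.zeros_like(y)
    r = y - K @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Kp = K @ p
        step = rs / (p @ Kp)
        x += step * p
        r -= step * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = rng.normal(size=50)
K = rbf_kernel(X) + 1e-2 * np.eye(50)  # jitter/noise term for conditioning
alpha = conjugate_gradients(K, y)
print(np.allclose(K @ alpha, y, atol=1e-5))
```

Each iteration costs one $O(n^2)$ matrix-vector product, which is why iterative solvers of this kind scale to much larger $n$ than an $O(n^3)$ Cholesky factorization.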

Gaussian Processes · Hyperparameter Optimization

The Behavior and Convergence of Local Bayesian Optimization

1 code implementation · NeurIPS 2023 · Kaiwen Wu, Kyurae Kim, Roman Garnett, Jacob R. Gardner

A recent development in Bayesian optimization is the use of local optimization strategies, which can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies.
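The advantage of local over global strategies in high dimensions can be illustrated with a toy experiment (a sketch only, not the paper's algorithm; the objective, dimension, and budgets are assumptions): on a smooth 20-dimensional function, a short local ascent beats the best of many uniform random samples.

```python
import numpy as np

# Toy high-dimensional comparison: maximize f(x) = -||x - x*||^2 over [0, 1]^20.
rng = np.random.default_rng(0)
d = 20
x_star = rng.uniform(0, 1, d)
f = lambda x: -np.sum((x - x_star) ** 2)

# "Global" strategy: best of 1000 uniform random samples.
best_global = max(f(rng.uniform(0, 1, d)) for _ in range(1000))

# "Local" strategy: 100 gradient-ascent steps from a single random start.
x = rng.uniform(0, 1, d)
for _ in range(100):
    x += 0.1 * (-2.0 * (x - x_star))  # gradient of f at x
best_local = f(x)

print(best_local > best_global)  # local ascent wins in high dimensions
```

Uniform sampling suffers from the curse of dimensionality (random points in 20 dimensions are almost never close to the optimum), while the local method's progress is dimension-independent on this objective.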

Bayesian Optimization

On the Convergence of Black-Box Variational Inference

no code implementations · NeurIPS 2023 · Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi-An Ma, Jacob R. Gardner

We provide the first convergence guarantee for full black-box variational inference (BBVI), also known as Monte Carlo variational inference.
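A minimal sketch of the Monte Carlo gradient estimator at the core of BBVI (the Gaussian target, variational family, step size, and sample count are toy assumptions): fit $q = \mathcal{N}(\mu, \sigma^2)$ to $p = \mathcal{N}(3, 1)$ by stochastic gradient ascent on a reparameterized ELBO estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_p(z):
    # Gradient of the (unnormalized) target log-density log N(z; 3, 1).
    return -(z - 3.0)

mu, log_s = 0.0, 0.0
lr, n_samples = 0.05, 32
for _ in range(2000):
    eps = rng.normal(size=n_samples)
    s = np.exp(log_s)
    z = mu + s * eps                          # reparameterization trick
    dlogp = grad_log_p(z)
    grad_mu = dlogp.mean()                    # d ELBO / d mu
    grad_log_s = (dlogp * eps).mean() * s + 1.0  # +1: entropy term d/d log_s
    mu += lr * grad_mu
    log_s += lr * grad_log_s

print(round(mu, 1), round(np.exp(log_s), 1))  # approaches the target (3, 1)
```

The Gaussian entropy is handled analytically ($\partial H/\partial \log\sigma = 1$), so only the model term is estimated by Monte Carlo; the noise of exactly this kind of estimator is what the convergence analysis must control.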

Bayesian Inference · Variational Inference

Practical and Matching Gradient Variance Bounds for Black-Box Variational Bayesian Inference

no code implementations · 18 Mar 2023 · Kyurae Kim, Kaiwen Wu, Jisu Oh, Jacob R. Gardner

Understanding the gradient variance of black-box variational inference (BBVI) is a crucial step for establishing its convergence and developing algorithmic improvements.

Bayesian Inference · Variational Inference

Local Bayesian optimization via maximizing probability of descent

1 code implementation · 21 Oct 2022 · Quan Nguyen, Kaiwen Wu, Jacob R. Gardner, Roman Garnett

Local optimization presents a promising approach to expensive, high-dimensional black-box optimization by sidestepping the need to globally explore the search space.

Bayesian Optimization

Discovering Many Diverse Solutions with Bayesian Optimization

1 code implementation · 20 Oct 2022 · Natalie Maus, Kaiwen Wu, David Eriksson, Jacob Gardner

Bayesian optimization (BO) is a popular approach for sample-efficient optimization of black-box objective functions.

Bayesian Optimization

Stronger and Faster Wasserstein Adversarial Attacks

1 code implementation · ICML 2020 · Kaiwen Wu, Allen Houze Wang, Yao-Liang Yu

While the majority of existing attacks measure perturbations under the $\ell_p$ metric, the Wasserstein distance, which takes pixel-space geometry into account, has long been known to be a suitable measure of image quality and has recently emerged as a compelling alternative to $\ell_p$ in adversarial attacks.
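The geometric difference between the two metrics can be shown with a tiny 1-D example (the signals and the CDF-based $W_1$ formula are illustrative assumptions, not the paper's attack): $\ell_2$ cannot tell a one-pixel shift of mass from a forty-pixel shift, while the Wasserstein distance can.

```python
import numpy as np

def wasserstein_1d(p, q):
    # W1 between two 1-D distributions on the same unit-spaced grid:
    # the integral of |CDF_p - CDF_q|.
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

x = np.zeros(100); x[50] = 1.0   # all mass at pixel 50
y = np.zeros(100); y[51] = 1.0   # mass shifted by 1 pixel
z = np.zeros(100); z[90] = 1.0   # mass shifted by 40 pixels

print(np.linalg.norm(x - y), np.linalg.norm(x - z))  # identical l_2 distances
print(wasserstein_1d(x, y), wasserstein_1d(x, z))    # W1: 1.0 vs 40.0
```

Because the supports are disjoint, both shifts look equally large to any $\ell_p$ norm, whereas $W_1$ grows with how far the mass actually travels.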

Newton-type Methods for Minimax Optimization

1 code implementation · 25 Jun 2020 · Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yao-Liang Yu

We prove their local convergence at strict local minimax points, which are surrogates of global solutions.

Reinforcement Learning (RL)

Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin

no code implementations · 26 Jul 2019 · Kaiwen Wu, Yao-Liang Yu

Deep models, while extremely versatile and accurate, are vulnerable to adversarial attacks: slight perturbations, imperceptible to humans, can completely flip their predictions.

Adversarial Robustness

Distributional Reinforcement Learning for Efficient Exploration

no code implementations · 13 May 2019 · Borislav Mavrin, Shangtong Zhang, Hengshuai Yao, Linglong Kong, Kaiwen Wu, Yao-Liang Yu

In distributional reinforcement learning (RL), the estimated distribution of the value function models both parametric and intrinsic uncertainty.

Atari Games · Distributional Reinforcement Learning
