Search Results for author: Maximilian Balandat

Found 13 papers, 7 papers with code

Robust Multi-Objective Bayesian Optimization Under Input Noise

1 code implementation • 15 Feb 2022 • Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, Eytan Bakshy

In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected.
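
The implementation accompanying this paper builds on BoTorch. As a minimal, assumption-laden sketch of the underlying idea (not the paper's full method), BoTorch's InputPerturbation input transform evaluates the surrogate at perturbed copies of each test point, so downstream acquisition functions can account for input noise; the toy data and noise scale below are illustrative only:

    import torch
    from botorch.models import SingleTaskGP
    from botorch.models.transforms.input import InputPerturbation

    # Toy data: 8 designs in [0, 1]^2 with two outcomes (illustrative only).
    train_X = torch.rand(8, 2, dtype=torch.double)
    train_Y = torch.stack([train_X.sin().sum(-1), train_X.cos().sum(-1)], dim=-1)

    # A fixed set of input perturbations; in practice these would be samples
    # from the assumed input-noise distribution of the manufacturing process.
    perturbations = 0.05 * torch.randn(16, 2, dtype=torch.double)

    # With the transform attached, posterior queries are evaluated at all 16
    # perturbed copies of each test point, exposing the effect of input noise.
    model = SingleTaskGP(
        train_X,
        train_Y,
        input_transform=InputPerturbation(perturbation_set=perturbations),
    )
    posterior = model.posterior(torch.rand(4, 2, dtype=torch.double))
    print(posterior.mean.shape)  # the perturbed copies show up in this shape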

Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs

1 code implementation • NeurIPS 2021 • Raul Astudillo, Daniel R. Jiang, Maximilian Balandat, Eytan Bakshy, Peter I. Frazier

To overcome the shortcomings of existing approaches, we propose the budgeted multi-step expected improvement, a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous and unknown evaluation costs.
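
The proposed acquisition function is multi-step and treats evaluation costs as unknown; reproducing it here would be involved. As a point of reference only, the sketch below shows the classical one-step "expected improvement per unit cost" heuristic that the paper generalizes, with a hypothetical known cost model:

    import torch
    from botorch.models import SingleTaskGP
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition import ExpectedImprovement
    from gpytorch.mlls import ExactMarginalLogLikelihood

    train_X = torch.rand(10, 1, dtype=torch.double)
    train_Y = torch.sin(6.0 * train_X)
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
    ei = ExpectedImprovement(model=model, best_f=train_Y.max())

    def cost(X: torch.Tensor) -> torch.Tensor:
        # Hypothetical known cost model (evaluations get pricier as x grows).
        # The paper instead models unknown costs alongside the objective.
        return 1.0 + 4.0 * X.squeeze(-1).squeeze(-1)

    # Score 100 random candidates by EI per unit cost and pick the best.
    X_cand = torch.rand(100, 1, 1, dtype=torch.double)  # batch of 100, q=1
    ei_per_cost = ei(X_cand) / cost(X_cand)
    x_next = X_cand[ei_per_cost.argmax()]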

Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces

no code implementations • 22 Sep 2021 • Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy

In this work we propose MORBO, a method for multi-objective Bayesian optimization over high-dimensional search spaces.

Bayesian Optimization with High-Dimensional Outputs

2 code implementations • NeurIPS 2021 • Wesley J. Maddox, Maximilian Balandat, Andrew Gordon Wilson, Eytan Bakshy

However, the Gaussian Process (GP) models typically used as probabilistic surrogates for multi-task Bayesian Optimization scale poorly with the number of outcomes, greatly limiting applicability.
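
BoTorch ships Kronecker-structured multi-task GPs in the spirit of this paper (the paper's exact model is a higher-order GP; the sketch below uses the related KroneckerMultiTaskGP, with toy data, as a stand-in):

    import torch
    from botorch.models import KroneckerMultiTaskGP
    from botorch.fit import fit_gpytorch_mll
    from gpytorch.mlls import ExactMarginalLogLikelihood

    # 20 inputs in [0, 1]^3 with 5 correlated toy outcomes.
    train_X = torch.rand(20, 3, dtype=torch.double)
    train_Y = torch.stack(
        [torch.sin(train_X.sum(-1) + 0.1 * i) for i in range(5)], dim=-1
    )

    # One kernel over inputs, one over tasks; the Kronecker structure makes
    # inference scale far better in the number of outcomes than independent GPs.
    model = KroneckerMultiTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

    posterior = model.posterior(torch.rand(4, 3, dtype=torch.double))
    print(posterior.mean.shape)  # torch.Size([4, 5]): joint over all outcomes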

Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization

no code implementations • ICML Workshop AutoML 2021 • David Eriksson, Pierce I-Jen Chuang, Samuel Daulton, Peng Xia, Akshat Shrivastava, Arun Babu, Shicong Zhao, Ahmed Aly, Ganesh Venkatesh, Maximilian Balandat

When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy.

Natural Language Understanding • Neural Architecture Search

Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

1 code implementation • NeurIPS 2021 • Samuel Daulton, Maximilian Balandat, Eytan Bakshy

We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique.
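
The resulting acquisition function (qNEHVI) is available in BoTorch as qNoisyExpectedHypervolumeImprovement. A minimal usage sketch; the two toy objectives, reference point, and optimizer settings are assumptions for illustration:

    import torch
    from botorch.models import SingleTaskGP
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition.multi_objective import qNoisyExpectedHypervolumeImprovement
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    train_X = torch.rand(12, 2, dtype=torch.double)
    train_Y = torch.stack(  # two toy objectives, both maximized
        [train_X.sin().sum(-1), train_X.cos().sum(-1)], dim=-1
    )
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

    # qNEHVI integrates over the uncertain Pareto frontier implied by the noisy
    # observations in X_baseline rather than conditioning on exact values.
    acqf = qNoisyExpectedHypervolumeImprovement(
        model=model,
        ref_point=[-2.0, -2.0],  # assumed reference point, worse than all observed Y
        X_baseline=train_X,
        prune_baseline=True,
    )
    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
    # q=4: generate four candidates in parallel.
    candidates, _ = optimize_acqf(acqf, bounds=bounds, q=4, num_restarts=10, raw_samples=128)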

Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees

1 code implementation • NeurIPS 2020 • Shali Jiang, Daniel R. Jiang, Maximilian Balandat, Brian Karrer, Jacob R. Gardner, Roman Garnett

In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.

Decision Making
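
This implementation is exposed in BoTorch as qMultiStepLookahead. A minimal sketch of a one-step-lookahead tree follows; the batch sizes, fantasy counts, and toy problem are illustrative assumptions:

    import torch
    from botorch.models import SingleTaskGP
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition.multi_step_lookahead import qMultiStepLookahead
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    train_X = torch.rand(10, 2, dtype=torch.double)
    train_Y = train_X.sin().sum(-1, keepdim=True)
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

    # One lookahead step with batch size 1 and 4 fantasy branches. The
    # "one-shot" trick optimizes the current candidate and all fantasy
    # decision variables of the scenario tree jointly, in a single program.
    acqf = qMultiStepLookahead(model=model, batch_sizes=[1], num_fantasies=[4])
    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
    q_aug = acqf.get_augmented_q_batch_size(1)  # current q plus tree variables
    X_full, _ = optimize_acqf(acqf, bounds=bounds, q=q_aug, num_restarts=10, raw_samples=128)
    X_next = acqf.extract_candidates(X_full)  # the point(s) to evaluate now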

Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization

1 code implementation • NeurIPS 2020 • Samuel Daulton, Maximilian Balandat, Eytan Bakshy

In many real-world scenarios, decision makers seek to efficiently optimize multiple competing objectives in a sample-efficient fashion.

BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization

1 code implementation • NeurIPS 2020 • Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, Eytan Bakshy

Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design.

Bayesian Optimisation • Experimental Design
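
For orientation, one iteration of the Monte-Carlo BO loop that BoTorch is built around looks roughly as follows (the toy objective and settings are assumptions, not from the paper):

    import torch
    from botorch.models import SingleTaskGP
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition import qExpectedImprovement
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    def f(X: torch.Tensor) -> torch.Tensor:
        # Stand-in for the expensive black-box function being optimized.
        return -((X - 0.5) ** 2).sum(-1, keepdim=True)

    train_X = torch.rand(8, 2, dtype=torch.double)
    train_Y = f(train_X)

    # Fit the GP surrogate, maximize the MC acquisition, evaluate the batch.
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
    acqf = qExpectedImprovement(model=model, best_f=train_Y.max())
    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
    candidates, _ = optimize_acqf(acqf, bounds=bounds, q=2, num_restarts=10, raw_samples=128)
    new_Y = f(candidates)  # append (candidates, new_Y) and repeat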

Minimizing Regret on Reflexive Banach Spaces and Nash Equilibria in Continuous Zero-Sum Games

no code implementations • NeurIPS 2016 • Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen

We study a general adversarial online learning problem, in which we are given a decision set $\mathcal{X}$ in a reflexive Banach space $X$ and a sequence of reward vectors in the dual space of $X$.

Online Learning
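
For reference, the regret in this adversarial setting compares the cumulative reward of the decisions played against the best fixed decision in hindsight (the standard definition; notation assumed to match the paper): $R_n = \sup_{x \in \mathcal{X}} \sum_{t=1}^{n} \langle u_t, x \rangle - \sum_{t=1}^{n} \langle u_t, x_t \rangle$, where $x_t \in \mathcal{X}$ is the decision played in round $t$ and $u_t \in X^*$ the reward vector revealed afterwards.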

Minimizing Regret on Reflexive Banach Spaces and Learning Nash Equilibria in Continuous Zero-Sum Games

no code implementations3 Jun 2016 Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen

Under the assumption of uniformly continuous rewards, we obtain explicit anytime regret bounds in a setting where the decision set is the set of probability distributions on a compact metric space $S$ whose Radon-Nikodym derivatives are elements of $L^p(S)$ for some $p > 1$.

Online Learning
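
Concretely, assuming a fixed reference measure $\mu$ on $S$, the decision set described above is $\mathcal{X} = \{ x \in L^p(S) : x \ge 0, \ \int_S x \, d\mu = 1 \}$, i.e. probability distributions on $S$ whose densities lie in $L^p(S)$.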
