Search Results for author: Maximilian Balandat

Found 19 papers, 10 papers with code

Accelerating Look-ahead in Bayesian Optimization: Multilevel Monte Carlo is All you Need

1 code implementation • 3 Feb 2024 • Shangda Yang, Vitaly Zankin, Maximilian Balandat, Stefan Scherer, Kevin Carlberg, Neil Walton, Kody J. H. Law

We leverage multilevel Monte Carlo (MLMC) to improve the performance of multi-step look-ahead Bayesian optimization (BO) methods that involve nested expectations and maximizations.

Bayesian Optimization
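
For intuition, here is a minimal sketch of a multilevel Monte Carlo estimator for a generic nested expectation of the kind the abstract describes. The toy objective, the discretization levels, and the sample counts are illustrative assumptions, not the estimator from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_value(x, n_inner):
    """Crude estimate of max_a E[g(a, Y) | x] using n_inner inner samples.

    Uses a toy inner objective g(a, y) = -(a - y)^2 over a small action grid.
    """
    y = rng.normal(loc=x, size=n_inner)  # toy conditional distribution of Y | x
    actions = np.linspace(-2.0, 2.0, 41)
    return max((-(a - y) ** 2).mean() for a in actions)

def mlmc_estimate(n_outer, n0=8, n_levels=4):
    """MLMC estimator of E_x[inner_value(x)] via the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], where level l uses n0 * 2^l
    inner samples. Fewer outer samples are spent on the expensive fine levels.
    (A production MLMC estimator would also couple the fine and coarse
    estimates by reusing inner samples to reduce the correction variance.)
    """
    total = 0.0
    for level in range(n_levels):
        m = max(n_outer // 2 ** level, 2)  # outer samples shrink with level
        xs = rng.normal(size=m)
        if level == 0:
            corr = np.mean([inner_value(x, n0) for x in xs])
        else:
            fine = np.array([inner_value(x, n0 * 2 ** level) for x in xs])
            coarse = np.array([inner_value(x, n0 * 2 ** (level - 1)) for x in xs])
            corr = np.mean(fine - coarse)
        total += corr
    return total

print(mlmc_estimate(n_outer=256))
```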

Joint Composite Latent Space Bayesian Optimization

no code implementations • 3 Nov 2023 • Natalie Maus, Zhiyuan Jerry Lin, Maximilian Balandat, Eytan Bakshy

To effectively tackle these challenges, we introduce Joint Composite Latent Space Bayesian Optimization (JoCo), a novel framework that jointly trains neural network encoders and probabilistic models to adaptively compress high-dimensional input and output spaces into manageable latent representations.

Bayesian Optimization

Bayesian Optimization of Function Networks with Partial Evaluations

no code implementations • 3 Nov 2023 • Poompol Buathong, Jiayue Wan, Samuel Daulton, Raul Astudillo, Maximilian Balandat, Peter I. Frazier

Recent work has considered Bayesian optimization of function networks (BOFN), where the objective function is computed via a network of functions, each taking as input the output of previous nodes in the network and additional parameters.

Bayesian Optimization
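
To make the function-network setting concrete, here is a hypothetical two-node network in which the intermediate output is observed; the node functions below are invented for illustration and are not from the paper:

```python
# Toy function network in the BOFN setting: the objective is computed by
# composing nodes, and intermediate node outputs (here y = h(x)) are
# observed, which partial-evaluation strategies can exploit.
def h(x):
    return x ** 2 + 1.0            # first node, e.g. an upstream simulator stage

def g(y, z):
    return -(y - 3.0) ** 2 + z     # downstream node consuming h's output plus a parameter

def network_objective(x, z):
    y = h(x)                       # observable intermediate evaluation
    return g(y, z), y              # final objective and the intermediate output

value, intermediate = network_objective(1.2, 0.5)
```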

Unexpected Improvements to Expected Improvement for Bayesian Optimization

no code implementations • NeurIPS 2023 • Sebastian Ament, Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy

Expected Improvement (EI) is arguably the most popular acquisition function in Bayesian optimization and has found countless successful applications, but its performance is often exceeded by that of more recent methods.

Bayesian Optimization
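
For reference, the classical analytic EI under a Gaussian posterior, the quantity this paper revisits (its numerical vanishing far from the incumbent motivates the paper's LogEI variants), is computed below. This is the textbook formula, not the paper's new acquisition:

```python
import math

def expected_improvement(mu, sigma, best_f):
    """Analytic EI for maximization under a Gaussian posterior N(mu, sigma^2):
    EI(x) = sigma * (z * Phi(z) + phi(z)), with z = (mu - best_f) / sigma.
    """
    if sigma <= 0.0:
        return max(mu - best_f, 0.0)
    z = (mu - best_f) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return sigma * (z * Phi + phi)

# Far below the incumbent, EI underflows toward zero, which is the numerical
# pathology the LogEI family is designed to avoid.
print(expected_improvement(mu=0.2, sigma=0.1, best_f=0.5))
```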

Bayesian Optimization over High-Dimensional Combinatorial Spaces via Dictionary-based Embeddings

1 code implementation • 3 Mar 2023 • Aryan Deshwal, Sebastian Ament, Maximilian Balandat, Eytan Bakshy, Janardhan Rao Doppa, David Eriksson

We use Bayesian Optimization (BO) and propose a novel surrogate modeling approach for efficiently handling a large number of binary and categorical parameters.

Bayesian Optimization

Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization

2 code implementations • 18 Oct 2022 • Samuel Daulton, Xingchen Wan, David Eriksson, Maximilian Balandat, Michael A. Osborne, Eytan Bakshy

We prove that under suitable reparameterizations, the BO policy that maximizes the probabilistic objective is the same as that which maximizes the AF, and therefore, PR enjoys the same regret bounds as the original BO policy using the underlying AF.

Bayesian Optimization
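
A minimal sketch of the probabilistic reparameterization (PR) idea for binary parameters: replace each discrete coordinate with a Bernoulli probability and optimize the expected acquisition value over those probabilities. The acquisition function below is a toy stand-in, and the crude coordinate search replaces the paper's gradient-based optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

def acq(x_binary):
    """Toy acquisition value over binary vectors (stand-in for a real AF)."""
    target = np.array([1, 0, 1, 1, 0])
    return -np.abs(x_binary - target).sum()

def pr_objective(theta, n_mc=256):
    """Probabilistic objective E_{x ~ Bernoulli(theta)}[acq(x)], by Monte Carlo."""
    samples = rng.random((n_mc, theta.size)) < theta  # x_i ~ Bernoulli(theta_i)
    return np.mean([acq(s.astype(float)) for s in samples])

theta = np.full(5, 0.5)
for _ in range(20):
    for i in range(theta.size):
        theta[i] = max(
            [0.05, 0.5, 0.95],
            key=lambda v: pr_objective(np.where(np.arange(theta.size) == i, v, theta)),
        )
print(np.round(theta, 2))  # concentrates near the discrete maximizer [1, 0, 1, 1, 0]
```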

Robust Multi-Objective Bayesian Optimization Under Input Noise

1 code implementation • 15 Feb 2022 • Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, Eytan Bakshy

In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected.

Bayesian Optimization

Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs

1 code implementation • NeurIPS 2021 • Raul Astudillo, Daniel R. Jiang, Maximilian Balandat, Eytan Bakshy, Peter I. Frazier

To overcome the shortcomings of existing approaches, we propose the budgeted multi-step expected improvement, a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous and unknown evaluation costs.

Bayesian Optimization

Bayesian Optimization with High-Dimensional Outputs

2 code implementations • NeurIPS 2021 • Wesley J. Maddox, Maximilian Balandat, Andrew Gordon Wilson, Eytan Bakshy

However, the Gaussian Process (GP) models typically used as probabilistic surrogates for multi-task Bayesian Optimization scale poorly with the number of outcomes, greatly limiting applicability.

Bayesian Optimization

Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization

no code implementations • ICML Workshop AutoML 2021 • David Eriksson, Pierce I-Jen Chuang, Samuel Daulton, Peng Xia, Akshat Shrivastava, Arun Babu, Shicong Zhao, Ahmed Aly, Ganesh Venkatesh, Maximilian Balandat

When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy.

Bayesian Optimization • Natural Language Understanding +1

Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

1 code implementation • NeurIPS 2021 • Samuel Daulton, Maximilian Balandat, Eytan Bakshy

We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique.

Bayesian Optimization
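
This acquisition function (qNEHVI) ships in the authors' BoTorch library, so a rough usage sketch is possible; the toy data and reference point are placeholders, and the import paths and signatures assume a recent BoTorch release and may shift between versions:

```python
import torch
from botorch.models import SingleTaskGP, ModelListGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.optim import optimize_acqf
from gpytorch.mlls import SumMarginalLogLikelihood

# Toy data: two objectives over a 2-d design space in [0, 1]^2.
train_X = torch.rand(10, 2, dtype=torch.double)
Y1 = -(train_X - 0.3).pow(2).sum(dim=-1, keepdim=True)
Y2 = -(train_X - 0.7).pow(2).sum(dim=-1, keepdim=True)
model = ModelListGP(SingleTaskGP(train_X, Y1), SingleTaskGP(train_X, Y2))
fit_gpytorch_mll(SumMarginalLogLikelihood(model.likelihood, model))

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[-1.0, -1.0],  # reference point bounding the hypervolume
    X_baseline=train_X,      # previously evaluated (possibly noisy) points
)
candidates, _ = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=4,                     # generate 4 candidates in parallel
    num_restarts=10,
    raw_samples=128,
)
print(candidates.shape)      # -> torch.Size([4, 2])
```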

Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees

1 code implementation • NeurIPS 2020 • Shali Jiang, Daniel R. Jiang, Maximilian Balandat, Brian Karrer, Jacob R. Gardner, Roman Garnett

In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.

Bayesian Optimization • Decision Making

BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization

2 code implementations • NeurIPS 2020 • Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, Eytan Bakshy

Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design.

Experimental Design
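
A minimal single BO step with BoTorch's Monte-Carlo acquisition machinery might look roughly like the following; the toy objective and optimizer settings are placeholders, and the import paths assume a recent release:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Fit a GP surrogate to toy observations, then optimize a sampled acquisition.
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)  # toy objective

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = qExpectedImprovement(model=model, best_f=train_Y.max())
candidate, _ = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=2,                  # Monte-Carlo acquisitions support parallel candidates
    num_restarts=10,
    raw_samples=64,
)
print(candidate)          # next points to evaluate on the true objective
```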

Minimizing Regret on Reflexive Banach Spaces and Nash Equilibria in Continuous Zero-Sum Games

no code implementations • NeurIPS 2016 • Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen

We study a general adversarial online learning problem, in which we are given a decision set $\mathcal{X}$ in a reflexive Banach space $X$ and a sequence of reward vectors in the dual space of $X$.

Minimizing Regret on Reflexive Banach Spaces and Learning Nash Equilibria in Continuous Zero-Sum Games

no code implementations • 3 Jun 2016 • Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen

Under the assumption of uniformly continuous rewards, we obtain explicit anytime regret bounds in a setting where the decision set is the set of probability distributions on a compact metric space $S$ whose Radon-Nikodym derivatives are elements of $L^p(S)$ for some $p > 1$.
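
For context, the regret minimized in this line of work is the standard adversarial online learning quantity: with plays $x_t \in \mathcal{X}$ and reward vectors $r_t$ in the dual space, $R_T = \sup_{x \in \mathcal{X}} \sum_{t=1}^{T} \langle r_t, x \rangle - \sum_{t=1}^{T} \langle r_t, x_t \rangle$. An anytime bound controls $R_T$ for all horizons $T$ simultaneously, rather than for a single horizon fixed in advance.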
