Search Results for author: Samuel Daulton

Found 15 papers, 8 papers with code

Bayesian Optimization of Function Networks with Partial Evaluations

no code implementations • 3 Nov 2023 • Poompol Buathong, Jiayue Wan, Samuel Daulton, Raul Astudillo, Maximilian Balandat, Peter I. Frazier

Recent work has considered Bayesian optimization of function networks (BOFN), where the objective function is computed via a network of functions, each taking as input the output of previous nodes in the network and additional parameters.

Bayesian Optimization
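
As a concrete illustration of this structure (a hedged sketch with made-up node functions, not the paper's benchmarks), a function network composes nodes so that each consumes its parents' outputs plus its own decision variables:

```python
# A toy two-node function network: node2 takes node1's output plus its
# own parameter; the final node's output is the optimization objective.
def node1(x1):
    return x1 ** 2

def node2(z1, x2):
    return z1 + 3.0 * x2

def objective(x1, x2):
    # Full network evaluation; a *partial* evaluation would stop after
    # node1, observing z1 without paying for the downstream nodes.
    return node2(node1(x1), x2)
```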

Unexpected Improvements to Expected Improvement for Bayesian Optimization

no code implementations • NeurIPS 2023 • Sebastian Ament, Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy

Expected Improvement (EI) is arguably the most popular acquisition function in Bayesian optimization and has found countless successful applications, but its performance is often exceeded by that of more recent methods.

Bayesian Optimization
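
For reference, the classical closed-form EI under a Gaussian posterior is short to state; the sketch below is the textbook expression (not the paper's proposed variant), as a minimal NumPy/SciPy illustration:

```python
# Closed-form Expected Improvement for maximization, where the model's
# posterior at x is N(mu, sigma^2) and best_f is the incumbent value:
# EI(x) = E[max(f(x) - best_f, 0)].
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    z = (mu - best_f) / sigma
    return (mu - best_f) * norm.cdf(z) + sigma * norm.pdf(z)
```

When the probability of improvement is tiny, both terms underflow to zero and the acquisition surface goes numerically flat, which is the kind of pathology that motivates reformulating EI.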

Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization

2 code implementations • 18 Oct 2022 • Samuel Daulton, Xingchen Wan, David Eriksson, Maximilian Balandat, Michael A. Osborne, Eytan Bakshy

We prove that under suitable reparameterizations, the BO policy that maximizes the probabilistic objective is the same as the one that maximizes the acquisition function (AF); therefore, probabilistic reparameterization (PR) enjoys the same regret bounds as the original BO policy using the underlying AF.

Bayesian Optimization
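
A toy illustration of the idea (an assumption-level sketch, not the authors' code): a binary decision variable x in {0, 1} is replaced by a Bernoulli probability theta, and the probabilistic objective E_{x ~ Bern(theta)}[AF(x)] becomes a continuous function of theta whose maximizer recovers the best discrete choice:

```python
import numpy as np

def af(x):
    # Stand-in acquisition function over a binary input (assumption)
    return 1.0 if x == 1 else 0.3

def probabilistic_objective(theta, n_samples=10_000, seed=0):
    # Monte Carlo estimate of E_{x ~ Bernoulli(theta)}[af(x)], which is
    # continuous in theta (here exactly theta*af(1) + (1-theta)*af(0))
    rng = np.random.default_rng(seed)
    xs = rng.random(n_samples) < theta
    return np.mean([af(int(x)) for x in xs])

print(probabilistic_objective(0.9))  # ~0.93; maximized as theta -> 1
```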

Log-Linear-Time Gaussian Processes Using Binary Tree Kernels

1 code implementation • 4 Oct 2022 • Michael K. Cohen, Samuel Daulton, Michael A. Osborne

We present a new kernel that allows for Gaussian process regression in $O((n+m)\log(n+m))$ time.

Gaussian Processes • Regression
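
For contrast, textbook exact GP regression costs $O(n^3)$ time in the number of training points via a Cholesky factorization; the sketch below (RBF kernel, 1-D inputs, illustrative only) shows the cubic baseline that the binary tree kernel's near-linear scaling avoids:

```python
# Textbook exact GP posterior mean with an RBF kernel; the Cholesky
# factorization of the n x n kernel matrix is the O(n^3) bottleneck.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)                 # O(n^3) time, O(n^2) memory
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf(x_test, x_train) @ alpha       # posterior mean at x_test
```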

Robust Multi-Objective Bayesian Optimization Under Input Noise

1 code implementation • 15 Feb 2022 • Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, Eytan Bakshy

In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected.

Bayesian Optimization

Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization

no code implementations • ICML Workshop AutoML 2021 • David Eriksson, Pierce I-Jen Chuang, Samuel Daulton, Peng Xia, Akshat Shrivastava, Arun Babu, Shicong Zhao, Ahmed Aly, Ganesh Venkatesh, Maximilian Balandat

When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy.

Bayesian Optimization • Natural Language Understanding +1

Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

1 code implementation • NeurIPS 2021 • Samuel Daulton, Maximilian Balandat, Eytan Bakshy

We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique.

Bayesian Optimization
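
The acquisition function from this paper ships in BoTorch as qNoisyExpectedHypervolumeImprovement; the sketch below shows plausible usage on a toy two-objective problem (the objectives, reference point, and hyperparameters are illustrative assumptions, and API names follow recent BoTorch releases):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.optim import optimize_acqf
from botorch.acquisition.multi_objective import (
    qNoisyExpectedHypervolumeImprovement,
)
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.stack(  # two noisy toy objectives (assumption)
    [train_X.sum(-1), -(train_X ** 2).sum(-1)], dim=-1
) + 0.05 * torch.randn(20, 2, dtype=torch.double)

model = SingleTaskGP(train_X, train_Y)  # independent GP per objective
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[-1.0, -3.0],  # hypervolume reference point (assumed)
    X_baseline=train_X,      # previously evaluated designs
)
bounds = torch.stack([torch.zeros(2), torch.ones(2)]).double()
candidates, _ = optimize_acqf(  # q=4 candidates generated in parallel
    acqf, bounds=bounds, q=4, num_restarts=10, raw_samples=128
)
```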

Distilled Thompson Sampling: Practical and Efficient Thompson Sampling via Imitation Learning

no code implementations • 29 Nov 2020 • Hongseok Namkoong, Samuel Daulton, Eytan Bakshy

We propose a novel imitation-learning-based algorithm that distills a TS policy into an explicit policy representation by performing posterior inference and optimization offline.

Action Generation • Imitation Learning +1
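
A schematic of the distillation step (a hedged sketch, not the authors' implementation): draw actions from a Thompson sampling (TS) policy offline, then fit an explicit, cheap-to-evaluate policy to imitate them. The posterior and the policy class (logistic regression) here are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_arms, d = 3, 5
post_mean = rng.normal(size=(n_arms, d))  # assumed Gaussian posterior
post_cov = 0.1 * np.eye(d)                # over per-arm reward weights

def ts_action(context):
    # Thompson sampling: sample weights from the posterior, act greedily
    w = np.stack([rng.multivariate_normal(post_mean[a], post_cov)
                  for a in range(n_arms)])
    return int(np.argmax(w @ context))

# Generate imitation data offline, then distill the sampled TS decisions
# into an explicit policy that can be served without posterior sampling.
contexts = rng.normal(size=(2000, d))
actions = np.array([ts_action(c) for c in contexts])
policy = LogisticRegression(max_iter=1000).fit(contexts, actions)
```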

BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization

2 code implementations • NeurIPS 2020 • Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, Eytan Bakshy

Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design.

Experimental Design
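
A minimal single-objective loop in BoTorch looks roughly as follows (the toy objective and all hyperparameters are assumptions; API names follow recent BoTorch releases):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def f(x):  # toy objective to maximize (assumption)
    return -((x - 0.6) ** 2).sum(-1, keepdim=True)

train_X = torch.rand(5, 1, dtype=torch.double)
train_Y = f(train_X)
bounds = torch.tensor([[0.0], [1.0]], dtype=torch.double)

for _ in range(10):  # fit model, maximize acquisition, evaluate, append
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
    acqf = ExpectedImprovement(model, best_f=train_Y.max())
    x_next, _ = optimize_acqf(acqf, bounds=bounds, q=1,
                              num_restarts=5, raw_samples=64)
    train_X = torch.cat([train_X, x_next])
    train_Y = torch.cat([train_Y, f(x_next)])
```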

Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

1 code implementation • NeurIPS 2017 • Taylor W. Killian, Samuel Daulton, George Konidaris, Finale Doshi-Velez

We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings.

Transfer Learning
