no code implementations • 3 Nov 2023 • Poompol Buathong, Jiayue Wan, Samuel Daulton, Raul Astudillo, Maximilian Balandat, Peter I. Frazier
Recent work has considered Bayesian optimization of function networks (BOFN), where the objective function is computed via a network of functions, each taking as input the output of previous nodes in the network and additional parameters.
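To make the function-network structure concrete, here is a minimal toy sketch (all node names and functions hypothetical, not from the paper): the objective is computed by composing node functions, each consuming the outputs of upstream nodes plus its own share of the input parameters.

```python
import math

def node_a(x1):
    # first node: depends only on the parameter x1 (hypothetical example)
    return math.sin(3.0 * x1)

def node_b(a_out, x2):
    # second node: takes the output of node_a plus the parameter x2
    return a_out ** 2 + 0.5 * x2

def network_objective(x1, x2):
    # the final objective is the output of the last node; BOFN methods can
    # model each node with its own surrogate instead of treating this
    # composition as a single black box
    return node_b(node_a(x1), x2)
```

The point of exploiting this structure is that observing the intermediate output of `node_a` gives the optimizer strictly more information than observing only the composed objective.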
no code implementations • NeurIPS 2023 • Sebastian Ament, Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy
Expected Improvement (EI) is arguably the most popular acquisition function in Bayesian optimization and has found countless successful applications, but its performance is often exceeded by that of more recent methods.
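For reference, the standard closed form of EI under a Gaussian posterior (for maximization) is shown below; this is the classical formula whose numerical pathologies motivate the paper's log-space reformulation, not the paper's LogEI variant itself.

```python
import math

def expected_improvement(mu, sigma, best_f):
    # Closed-form EI for maximization under a Gaussian posterior N(mu, sigma^2):
    #   EI = sigma * (z * Phi(z) + phi(z)),  z = (mu - best_f) / sigma
    if sigma <= 0.0:
        return max(mu - best_f, 0.0)
    z = (mu - best_f) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return sigma * (z * Phi + phi)
```

Far from any improvement, `z` is very negative and both terms underflow toward zero, which is exactly the regime where computing EI in log space becomes important.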
2 code implementations • 18 Oct 2022 • Samuel Daulton, Xingchen Wan, David Eriksson, Maximilian Balandat, Michael A. Osborne, Eytan Bakshy
We prove that, under suitable reparameterizations, the BO policy that maximizes the probabilistic objective is the same as the one that maximizes the AF; therefore, PR enjoys the same regret bounds as the original BO policy using the underlying AF.
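A toy sketch of the probabilistic-reparameterization idea for a single binary design variable (the acquisition values here are hypothetical): instead of optimizing the AF over z in {0, 1} directly, optimize the probabilistic objective E over the continuous probability theta in [0, 1].

```python
def acq(z):
    # hypothetical acquisition values at the two discrete designs
    return 1.0 if z == 1 else 0.25

def prob_objective(theta):
    # E_{z ~ Bernoulli(theta)}[acq(z)] — exact for one binary variable; with
    # many discrete variables this expectation is estimated by Monte Carlo
    return theta * acq(1) + (1.0 - theta) * acq(0)

# the probabilistic objective is linear in theta, so its maximizer sits at a
# vertex (theta = 1 here), recovering the best discrete design z = 1
best_theta = max((prob_objective(t / 100), t / 100) for t in range(101))[1]
```

The continuous surrogate objective admits gradient-based optimization while its maximizer still corresponds to a discrete design, which is the property the regret argument relies on.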
1 code implementation • 4 Oct 2022 • Michael K. Cohen, Samuel Daulton, Michael A. Osborne
We present a new kernel that allows for Gaussian process regression in $O((n+m)\log(n+m))$ time.
1 code implementation • 15 Feb 2022 • Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, Eytan Bakshy
In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected.
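The effect of input noise can be seen in a small Monte Carlo sketch (the performance curve below is hypothetical): at a sharp optimum, the expected realized performance under perturbed inputs is well below the noiseless value.

```python
import math, random

random.seed(0)

def f(x):
    # hypothetical performance curve with a sharp peak at x = 0
    return math.exp(-50.0 * x * x)

def noisy_performance(x, noise_sd=0.1, n=10_000):
    # Monte Carlo estimate of expected performance when the implemented
    # design x is perturbed by Gaussian input noise
    return sum(f(x + random.gauss(0.0, noise_sd)) for _ in range(n)) / n
```

Here `f(0.0)` is 1.0, but `noisy_performance(0.0)` is substantially lower, which is why robust formulations optimize a risk measure of the perturbed objective rather than the nominal value.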
no code implementations • 22 Sep 2021 • Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy
Many real-world scientific and industrial applications require optimizing multiple competing black-box objectives.
no code implementations • ICML Workshop AutoML 2021 • David Eriksson, Pierce I-Jen Chuang, Samuel Daulton, Peng Xia, Akshat Shrivastava, Arun Babu, Shicong Zhao, Ahmed Aly, Ganesh Venkatesh, Maximilian Balandat
When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy.
1 code implementation • NeurIPS 2021 • Samuel Daulton, Maximilian Balandat, Eytan Bakshy
We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique.
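The quantity at the heart of EHVI is the dominated hypervolume of the Pareto frontier relative to a reference point. A minimal exact computation in two dimensions (maximization, hypothetical point values) looks like this:

```python
def hypervolume_2d(pareto_pts, ref):
    # Exact dominated hypervolume in 2D (maximization) for a set of mutually
    # non-dominated points, relative to a reference point ref that is
    # dominated by every point. Sweep points by decreasing first objective
    # and accumulate the area of each vertical strip.
    pts = sorted(pareto_pts, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv
```

EHVI-style acquisition functions score a candidate by the expected increase in this quantity under the posterior; in higher dimensions the computation uses box decompositions rather than this simple sweep.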
no code implementations • 29 Nov 2020 • Hongseok Namkoong, Samuel Daulton, Eytan Bakshy
We propose a novel imitation-learning-based algorithm that distills a TS policy into an explicit policy representation by performing posterior inference and optimization offline.
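A contextless toy sketch of the distillation idea (arm names and posteriors hypothetical; the paper's method learns a context-dependent policy via imitation learning rather than a single action): simulate the Thompson-sampling decision offline from the posterior, then commit to an explicit policy.

```python
import random

random.seed(0)

posterior = {"arm0": (8, 2), "arm1": (5, 5)}  # Beta(alpha, beta) per arm

def ts_action():
    # one Thompson-sampling decision: sample a mean for each arm from its
    # posterior and play the argmax
    draws = {a: random.betavariate(al, be) for a, (al, be) in posterior.items()}
    return max(draws, key=draws.get)

# distill offline: estimate the TS action distribution by simulation and
# commit to the modal action as an explicit, deterministic policy
counts = {"arm0": 0, "arm1": 0}
for _ in range(5000):
    counts[ts_action()] += 1
explicit_policy = max(counts, key=counts.get)
```

The appeal of distillation is that all posterior sampling happens offline; the deployed policy is a cheap, explicit function rather than a per-decision posterior draw.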
no code implementations • 22 Oct 2020 • Ryan M. Dreifuerst, Samuel Daulton, Yuchen Qian, Paul Varkey, Maximilian Balandat, Sanjay Kasturia, Anoop Tomar, Ali Yazdan, Vish Ponnampalam, Robert W. Heath
Wireless cellular networks have many parameters that are normally tuned upon deployment and re-tuned as the network changes.
1 code implementation • NeurIPS 2020 • Samuel Daulton, Maximilian Balandat, Eytan Bakshy
In many real-world scenarios, decision makers seek to efficiently optimize multiple competing objectives in a sample-efficient fashion.
no code implementations • 2 Nov 2019 • Samuel Daulton, Shaun Singh, Vashist Avadhanula, Drew Dimmery, Eytan Bakshy
Real-world applications frequently have constraints with respect to a currently deployed policy.
2 code implementations • NeurIPS 2020 • Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, Eytan Bakshy
Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automated machine learning, engineering, physics, and experimental design.
1 code implementation • NeurIPS 2017 • Taylor W. Killian, Samuel Daulton, George Konidaris, Finale Doshi-Velez
We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings.
1 code implementation • 20 Jun 2017 • Taylor Killian, Samuel Daulton, George Konidaris, Finale Doshi-Velez
We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings.
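A toy sketch of the HiP-MDP idea (dynamics and values hypothetical): a single shared dynamics model, modulated by a low-dimensional latent task embedding, covers a whole family of related tasks, so transferring to a new task only requires inferring its embedding.

```python
def shared_dynamics(state, action, w):
    # shared transition function; the latent embedding w = (drag, gain)
    # modulates the task-specific physics (hypothetical form)
    drag, gain = w
    return state + gain * action - drag * state

# two related tasks differ only in their latent embedding, not in the model
task_light = (0.1, 1.0)
task_heavy = (0.3, 0.5)

s_light = shared_dynamics(1.0, 1.0, task_light)
s_heavy = shared_dynamics(1.0, 1.0, task_heavy)
```

Because the model is shared, data from every task trains the same dynamics function, and each new task contributes only a few latent parameters.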