Search Results for author: Peter I. Frazier

Found 30 papers, 7 papers with code

Thinking inside the box: A tutorial on grey-box Bayesian optimization

no code implementations • 2 Jan 2022 • Raul Astudillo, Peter I. Frazier

However, internal information about objective function computation is often available.

Bayesian Optimization of Function Networks

1 code implementation • NeurIPS 2021 • Raul Astudillo, Peter I. Frazier

We consider Bayesian optimization of the output of a network of functions, where each function takes as input the output of its parent nodes, and where the network takes significant time to evaluate.

Gaussian Processes

Two-step Lookahead Bayesian Optimization with Inequality Constraints

no code implementations • 6 Dec 2021 • Yunxiang Zhang, Xiangyu Zhang, Peter I. Frazier

Recent advances in computationally efficient non-myopic Bayesian optimization (BO) improve query efficiency over traditional myopic methods like expected improvement while only modestly increasing computational cost.
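The expected improvement acquisition mentioned in this abstract has a well-known closed form under a Gaussian posterior. A minimal sketch of that formula (the posterior mean/std values below are illustrative, not taken from the paper):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for maximization under a Gaussian posterior
    N(mu, sigma^2) at a candidate point, given the best observed
    value f_best."""
    if sigma <= 0.0:
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (mu - f_best) * Phi + sigma * phi
```

Myopic methods like EI score one evaluation at a time; the two-step lookahead methods in this paper additionally account for how the current query changes the value of the next one.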

Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs

1 code implementation • NeurIPS 2021 • Raul Astudillo, Daniel R. Jiang, Maximilian Balandat, Eytan Bakshy, Peter I. Frazier

To overcome the shortcomings of existing approaches, we propose the budgeted multi-step expected improvement, a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous and unknown evaluation costs.

Restless Bandits with Many Arms: Beating the Central Limit Theorem

no code implementations • 25 Jul 2021 • Xiangyu Zhang, Peter I. Frazier

Thus, there is substantial value in understanding the performance of index policies and other policies that can be computed efficiently for large $N$.

Active Learning • Recommendation Systems

Multi-Attribute Bayesian Optimization With Interactive Preference Learning

no code implementations • 14 Nov 2019 • Raul Astudillo, Peter I. Frazier

The outcome of our approach is a menu of designs and evaluated attributes from which the DM makes a final selection.

Bayesian Optimization of Composite Functions

no code implementations • 4 Jun 2019 • Raul Astudillo, Peter I. Frazier

We consider optimization of composite objective functions, i.e., of the form $f(x)=g(h(x))$, where $h$ is a black-box derivative-free expensive-to-evaluate function with vector-valued outputs, and $g$ is a cheap-to-evaluate real-valued function.
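The key idea behind exploiting this composite structure is that one can model the expensive inner function $h$ probabilistically and push samples through the cheap outer function $g$, rather than modeling $f$ directly. A toy sketch of that idea (the functions and noise model below are illustrative, not the paper's method):

```python
import math
import random

def h(x):
    """Expensive, vector-valued black box (illustrative)."""
    return [math.sin(x), math.cos(x)]

def g(y):
    """Cheap, known real-valued outer function (illustrative)."""
    return -(y[0] ** 2 + y[1] ** 2 - 1.0) ** 2

def posterior_mean_of_f(x, n_samples=1000, noise=0.1, seed=0):
    """Estimate E[f(x)] = E[g(h(x))] by pushing posterior samples of
    h(x) (faked here as Gaussian perturbations of the truth) through
    the cheap function g, instead of modeling f = g(h(.)) directly."""
    rng = random.Random(seed)
    hx = h(x)
    total = 0.0
    for _ in range(n_samples):
        sample = [v + rng.gauss(0.0, noise) for v in hx]
        total += g(sample)
    return total / n_samples
```

Because $g$ is nonlinear, the posterior mean of $f$ generally differs from $g$ applied to the posterior mean of $h$; the Monte Carlo push-forward above captures that distinction.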

Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning

no code implementations • 12 Mar 2019 • Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, Andrew Gordon Wilson

Nonetheless, for hyperparameter tuning in deep neural networks, the time required to evaluate the validation error for even a few hyperparameter settings remains a bottleneck.

A Tutorial on Bayesian Optimization

6 code implementations • 8 Jul 2018 • Peter I. Frazier

It builds a surrogate for the objective and quantifies the uncertainty in that surrogate using a Bayesian machine learning technique, Gaussian process regression, and then uses an acquisition function defined from this surrogate to decide where to sample.
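The loop this abstract describes — fit a surrogate, score candidates with an acquisition function, sample where that score is highest — can be sketched minimally as follows. The surrogate here is a crude nearest-neighbour mean/uncertainty model standing in for Gaussian process regression, and the objective and UCB acquisition are illustrative choices, not the tutorial's code:

```python
import random

def objective(x):
    """Expensive black box (illustrative), maximized near x = 0.3."""
    return -(x - 0.3) ** 2

def surrogate(x, data):
    """Return (mean, std): mean of the nearest evaluated point, with
    uncertainty growing with distance to it (a crude stand-in for a
    Gaussian process posterior)."""
    xn, yn = min(data, key=lambda p: abs(p[0] - x))
    return yn, abs(x - xn)

def ucb(x, data, beta=2.0):
    """Upper-confidence-bound acquisition: mean plus scaled uncertainty."""
    mu, sigma = surrogate(x, data)
    return mu + beta * sigma

def bayes_opt(n_iter=20):
    # Seed the surrogate with the two endpoints, then repeatedly
    # evaluate wherever the acquisition function is largest.
    data = [(0.0, objective(0.0)), (1.0, objective(1.0))]
    candidates = [i / 100 for i in range(101)]
    for _ in range(n_iter):
        x = max(candidates, key=lambda c: ucb(c, data))
        data.append((x, objective(x)))
    return max(data, key=lambda p: p[1])

best_x, best_y = bayes_opt()
```

The acquisition function trades off exploitation (high surrogate mean) against exploration (high surrogate uncertainty), which is what lets the loop find good points with few evaluations of the expensive objective.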

Hyperparameter Optimization

Bayesian Optimization with Expensive Integrands

1 code implementation • 23 Mar 2018 • Saul Toscano-Palmerin, Peter I. Frazier

We propose a Bayesian optimization algorithm for objective functions that are sums or integrals of expensive-to-evaluate functions, allowing noisy evaluations.

Continuous-fidelity Bayesian Optimization with Knowledge Gradient

no code implementations • ICLR 2018 • Jian Wu, Peter I. Frazier

While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially in tuning the hyperparameters of neural networks, methods such as random search (Li et al., 2016) and multi-fidelity BO (e.g., Klein et al. (2017)) that exploit cheap approximations, e.g., training on a smaller dataset or for fewer iterations, can outperform standard BO approaches that use only full-fidelity observations.

Discretization-free Knowledge Gradient Methods for Bayesian Optimization

no code implementations • 20 Jul 2017 • Jian Wu, Peter I. Frazier

This paper studies Bayesian ranking and selection (R&S) problems with correlated prior beliefs and continuous domains, i.e., Bayesian optimization (BO).

Dueling Bandits With Weak Regret

no code implementations • ICML 2017 • Bangrui Chen, Peter I. Frazier

We consider online content recommendation with implicit feedback through pairwise comparisons, formalized as the so-called dueling bandit problem.

Bayesian Optimization with Gradients

1 code implementation • NeurIPS 2017 • Jian Wu, Matthias Poloczek, Andrew Gordon Wilson, Peter I. Frazier

Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions.

Bayes-Optimal Entropy Pursuit for Active Choice-Based Preference Learning

no code implementations • 24 Feb 2017 • Stephen N. Pallone, Peter I. Frazier, Shane G. Henderson

Under certain noise assumptions, we show that the Bayes-optimal policy for maximally reducing entropy of the posterior distribution of this linear classifier is a greedy policy, and that this policy achieves a linear lower bound when alternatives can be constructed from the continuum.

Active Learning

Probabilistic Bisection Converges Almost as Quickly as Stochastic Approximation

no code implementations • 12 Dec 2016 • Peter I. Frazier, Shane G. Henderson, Rolf Waeber

The probabilistic bisection algorithm (PBA) solves a class of stochastic root-finding problems in one dimension by successively updating a prior belief on the location of the root based on noisy responses to queries at chosen points.
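The update this abstract describes — query at the posterior median, then reweight the posterior by the known correctness probability of the noisy sign response — can be sketched on a discretized domain. A minimal grid-based sketch; the root location, response probability p, and grid size below are illustrative assumptions:

```python
import random

def pba(root=0.37, p=0.7, n_queries=200, grid_size=1000, seed=1):
    """Probabilistic bisection on [0, 1]: maintain a posterior over the
    root's location, query the posterior median, and Bayes-update using
    noisy sign responses that are correct with known probability p."""
    rng = random.Random(seed)
    grid = [(i + 0.5) / grid_size for i in range(grid_size)]
    post = [1.0 / grid_size] * grid_size          # uniform prior

    def posterior_median():
        cum = 0.0
        for g, w in zip(grid, post):
            cum += w
            if cum >= 0.5:
                return g
        return grid[-1]

    for _ in range(n_queries):
        median = posterior_median()
        # Noisy oracle: reports whether the root lies right of the
        # query point, correctly with probability p.
        truth_right = root > median
        says_right = truth_right if rng.random() < p else not truth_right
        # Bayes update: up-weight the indicated side by p, the other
        # side by 1 - p, then renormalize.
        post = [w * (p if (g > median) == says_right else 1.0 - p)
                for g, w in zip(grid, post)]
        total = sum(post)
        post = [w / total for w in post]

    return posterior_median()
```

Because each response carries a fixed amount of information, the posterior concentrates geometrically around the root, which is what underlies the convergence-rate comparison with stochastic approximation in the paper's title.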

Warm Starting Bayesian Optimization

no code implementations • 11 Aug 2016 • Matthias Poloczek, Jialei Wang, Peter I. Frazier

We develop a framework for warm-starting Bayesian optimization that reduces the solution time required to solve an optimization problem that is one in a sequence of related problems.

Multi-Step Bayesian Optimization for One-Dimensional Feasibility Determination

no code implementations • 11 Jul 2016 • J. Massey Cashore, Lemuel Kumarga, Peter I. Frazier

Bayesian optimization methods allocate limited sampling budgets to maximize expensive-to-evaluate functions.

The Parallel Knowledge Gradient Method for Batch Bayesian Optimization

2 code implementations • NeurIPS 2016 • Jian Wu, Peter I. Frazier

In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g., when evaluating the performances of several different neural network architectures in a parallel computing environment.

The Bayesian Linear Information Filtering Problem

no code implementations • 30 May 2016 • Bangrui Chen, Peter I. Frazier

We present a Bayesian sequential decision-making formulation of the information filtering problem, in which an algorithm presents items (news articles, scientific papers, tweets) arriving in a stream, and learns relevance from user feedback on presented items.

Decision Making

Dueling Bandits with Dependent Arms

no code implementations • 28 May 2016 • Bangrui Chen, Peter I. Frazier

We study dueling bandits with weak utility-based regret when preferences over arms have a total order and carry observable feature vectors.

Multi-Information Source Optimization

no code implementations • NeurIPS 2017 • Matthias Poloczek, Jialei Wang, Peter I. Frazier

We consider Bayesian optimization of an expensive-to-evaluate black-box objective function, where we also have access to cheaper approximations of the objective.

Parallel Bayesian Global Optimization of Expensive Functions

no code implementations • 16 Feb 2016 • Jialei Wang, Scott C. Clark, Eric Liu, Peter I. Frazier

We also show that the resulting one-step Bayes optimal algorithm for parallel global optimization finds high-quality solutions with fewer evaluations than a heuristic based on approximately maximizing the q-EI.

Stratified Bayesian Optimization

no code implementations • 7 Feb 2016 • Saul Toscano-Palmerin, Peter I. Frazier

We consider derivative-free black-box global optimization of expensive noisy functions, when most of the randomness in the objective is produced by a few influential scalar random inputs.

Bayes-Optimal Effort Allocation in Crowdsourcing: Bounds and Index Policies

no code implementations • 31 Dec 2015 • Weici Hu, Peter I. Frazier

We consider effort allocation in crowdsourcing, where we wish to assign labeling tasks to imperfect homogeneous crowd workers to maximize overall accuracy in a continuous-time Bayesian setting, subject to budget and time constraints.

Bayesian optimization for materials design

1 code implementation • 3 Jun 2015 • Peter I. Frazier, Jialei Wang

We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets.

A Markov Decision Process Analysis of the Cold Start Problem in Bayesian Information Filtering

no code implementations • 29 Oct 2014 • Xiaoting Zhao, Peter I. Frazier

We focus on the cold-start setting for this problem, in which we have limited historical data on the user's preferences and must rely on feedback from forwarded articles to learn the fraction of items relevant to the user in each of several item categories.

Probabilistic Group Testing under Sum Observations: A Parallelizable 2-Approximation for Entropy Loss

no code implementations • 16 Jul 2014 • Weidong Han, Purnima Rajan, Peter I. Frazier, Bruno M. Jedynak

We consider the problem of group testing with sum observations and noiseless answers, in which we aim to locate multiple objects by querying the number of objects in each of a sequence of chosen sets.

A New Optimal Stepsize For Approximate Dynamic Programming

no code implementations • 10 Jul 2014 • Ilya O. Ryzhov, Peter I. Frazier, Warren B. Powell

Approximate dynamic programming (ADP) has proven itself in a wide range of applications spanning large-scale transportation problems, health care, revenue management, and energy systems.