1 code implementation • 3 Feb 2024 • Carl Hvarfner, Erik Orm Hellsten, Luigi Nardi
High-dimensional problems have long been considered the Achilles' heel of Bayesian optimization algorithms.
1 code implementation • 24 Nov 2023 • Carl Hvarfner, Frank Hutter, Luigi Nardi
The optimization of expensive-to-evaluate black-box functions is prevalent in various scientific disciplines.
1 code implementation • 5 Oct 2023 • Erik Orm Hellsten, Carl Hvarfner, Leonard Papenmeier, Luigi Nardi
We propose a group testing approach to identify active variables and thereby enable efficient optimization in such high-dimensional domains.
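The group-testing idea can be illustrated with a toy sketch (not the paper's actual, noise-aware procedure): perturb whole groups of coordinates at once, discard groups whose joint perturbation leaves the objective unchanged, and split the remaining ones until single active variables are isolated. The function names, perturbation scheme, and threshold below are purely illustrative.

```python
import numpy as np

def group_test_active_vars(f, x_default, dim, n_groups=4, tol=1e-3, rng=None):
    """Toy group-testing sketch for active-variable identification.

    Coordinates are perturbed in whole groups; a group whose perturbation
    changes f noticeably is split in two and re-tested until single active
    coordinates are isolated. Inactive groups are discarded after one test.
    """
    rng = np.random.default_rng(rng)
    f0 = f(x_default)
    active = []
    # Start from a coarse partition of all coordinates into groups.
    groups = [list(g) for g in np.array_split(np.arange(dim), n_groups)]
    while groups:
        g = groups.pop()
        x = x_default.copy()
        x[g] += rng.uniform(0.5, 1.0, size=len(g))  # perturb the whole group jointly
        if abs(f(x) - f0) > tol:                    # group contains an active variable
            if len(g) == 1:
                active.append(int(g[0]))
            else:                                   # binary split and re-test
                mid = len(g) // 2
                groups.extend([g[:mid], g[mid:]])
    return sorted(active)

# Example: only dimensions 2 and 7 influence the objective.
f = lambda x: (x[2] - 1.0) ** 2 + np.sin(x[7])
print(group_test_active_vars(f, np.zeros(10), dim=10))  # -> [2, 7]
```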
2 code implementations • NeurIPS 2023 • Neeratyoy Mallik, Edward Bergman, Carl Hvarfner, Danny Stoll, Maciej Janowski, Marius Lindauer, Luigi Nardi, Frank Hutter
Hyperparameters of Deep Learning (DL) pipelines are crucial for their downstream performance.
no code implementations • NeurIPS 2023 • Carl Hvarfner, Erik Hellsten, Frank Hutter, Luigi Nardi
Gaussian processes are the model of choice in Bayesian optimization and active learning.
2 code implementations • 9 Jun 2022 • Carl Hvarfner, Frank Hutter, Luigi Nardi
As a lightweight approach with superior results, Joint Entropy Search (JES) provides a new go-to acquisition function for Bayesian optimization.
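Roughly, JES scores a candidate $x$ by how much information its evaluation is expected to reveal about the joint optimum $(x^*, f^*)$. As a sketch (notation illustrative, not verbatim from the paper), this can be written as the expected reduction in predictive entropy after conditioning on a sampled optimum:

$$
\alpha_{\mathrm{JES}}(x) \;=\; H\big[p(y \mid \mathcal{D}, x)\big] \;-\; \mathbb{E}_{(x^*, f^*)}\Big[H\big[p\big(y \mid \mathcal{D} \cup \{(x^*, f^*)\},\, x,\, f \le f^*\big)\big]\Big],
$$

where the expectation is over optima sampled from the current posterior and the constraint $f \le f^*$ encodes that the sampled $f^*$ is the maximum.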
1 code implementation • 23 Apr 2022 • Carl Hvarfner, Danny Stoll, Artur Souza, Marius Lindauer, Frank Hutter, Luigi Nardi
To address this issue, we propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user.
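A minimal sketch of the idea, assuming the paper's general recipe of multiplying a standard acquisition function by the user prior raised to a power that decays with the iteration count; the EI implementation, the Gaussian prior, and the constant $\beta$ below are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    """Standard EI for minimization, given GP posterior mean/std at candidates."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def prior_weighted_acquisition(mu, sigma, best_f, prior_pdf, x, n_iter, beta=10.0):
    """piBO-style acquisition: EI weighted by a user prior over the optimum's
    location, with the prior's influence decaying as iterations accumulate."""
    return expected_improvement(mu, sigma, best_f) * prior_pdf(x) ** (beta / max(n_iter, 1))

# Illustrative prior: the user believes the optimum lies near x = 0.2.
prior_pdf = lambda x: norm.pdf(x, loc=0.2, scale=0.1)

# Fake GP posterior at three candidate points, just to show the call.
mu = np.array([0.3, 0.1, 0.5])
sigma = np.array([0.2, 0.1, 0.3])
x = np.array([0.15, 0.5, 0.9])
print(prior_weighted_acquisition(mu, sigma, best_f=0.25, prior_pdf=prior_pdf, x=x, n_iter=5))
```

Early on the prior dominates the search; as $n$ grows the exponent $\beta/n$ shrinks, so even a misleading prior is eventually overridden by the observed data.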