no code implementations • 13 Feb 2024 • Jose Pablo Folch, Calvin Tsay, Robert M Lee, Behrang Shafei, Weronika Ormaniec, Andreas Krause, Mark van der Wilk, Ruth Misener, Mojmír Mutný
Bayesian optimization is a methodology for optimizing black-box functions.
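As context for this setting, here is a minimal sketch of a generic Bayesian-optimization loop with a Gaussian-process surrogate and a UCB acquisition rule; the toy objective `f`, the RBF kernel, and all constants are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Standard GP regression: posterior mean and variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x   # hypothetical black-box objective
rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=3)                    # initial design
y = f(X) + 0.01 * rng.standard_normal(3)
grid = np.linspace(-1, 2, 200)

for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * np.sqrt(var))]   # UCB acquisition
    X = np.append(X, x_next)
    y = np.append(y, f(x_next) + 0.01 * rng.standard_normal())

print("best observed:", X[np.argmax(y)], y.max())
```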
1 code implementation • 25 Jul 2023 • Manish Prajapat, Mojmír Mutný, Melanie N. Zeilinger, Andreas Krause
In many important applications, such as coverage control, experiment design, and informative path planning, rewards naturally exhibit diminishing returns, i.e., their value decreases as similar states are visited repeatedly.
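A toy example of this effect, assuming a simple disc-coverage reward as a stand-in for the paper's rewards: the marginal gain of visiting a state close to a previously visited one is small, while visiting a distant state yields a large gain.

```python
import numpy as np

def coverage(states, radius=1.0):
    # Fraction of a grid covered by discs around visited states: a set
    # function whose marginal gains shrink as similar states accumulate.
    xs, ys = np.meshgrid(np.linspace(0, 5, 101), np.linspace(0, 5, 101))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    covered = np.zeros(len(pts), dtype=bool)
    for s in states:
        covered |= np.linalg.norm(pts - s, axis=1) <= radius
    return covered.mean()

path = [np.array([1.0, 1.0]), np.array([1.2, 1.1]), np.array([4.0, 4.0])]
prev = 0.0
for i in range(1, len(path) + 1):
    val = coverage(path[:i])
    print(f"after visit {i}: coverage={val:.3f}, marginal gain={val - prev:.3f}")
    prev = val
```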
no code implementations • 29 Jun 2022 • Mojmír Mutný, Tadeusz Janik, Andreas Krause
A key challenge in science and engineering is to design experiments to learn about some unknown quantity of interest.
no code implementations • 26 May 2022 • Mojmír Mutný, Andreas Krause
In this work, we investigate the optimal design of experiments for estimation of linear functionals in reproducing kernel Hilbert spaces (RKHSs).
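As a hedged illustration of estimating a linear functional of an RKHS function: the sketch below fits kernel ridge regression to noisy evaluations and plugs the fit into the functional L(f) = ∫₀¹ f(x) dx. The kernel, design points, and noise level are assumptions, and the design is fixed rather than optimized as in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / 0.3) ** 2)
f = lambda x: np.sin(2 * np.pi * x)          # hypothetical unknown RKHS function
X = np.linspace(0, 1, 15)                    # a fixed (non-optimized) design
y = f(X) + 0.05 * rng.standard_normal(len(X))

# Kernel ridge regression estimate of f, then plug into the linear
# functional L(f) = integral of f over [0, 1], approximated on a fine grid.
alpha = np.linalg.solve(k(X, X) + 0.05 ** 2 * np.eye(len(X)), y)
grid = np.linspace(0, 1, 1001)
f_hat = k(grid, X) @ alpha
print("estimated integral over [0,1]:", f_hat.mean())   # true value is 0
```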
no code implementations • 22 Oct 2021 • Elvis Nava, Mojmír Mutný, Andreas Krause
In Bayesian Optimization (BO) we study black-box function optimization with noisy point evaluations and Bayesian priors.
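A minimal sketch of the interplay between a Bayesian prior and noisy point evaluations, reduced to a single input location: a conjugate Gaussian update of the belief about f(x). All numbers are illustrative and this is not the paper's algorithm.

```python
# Conjugate Gaussian update: a prior belief about f(x) at one point,
# refined by repeated noisy evaluations y = f(x) + eps.
mu, var = 0.0, 1.0          # Bayesian prior on f(x)
noise_var = 0.25            # evaluation-noise variance (assumed known)
for y in [0.9, 1.1, 0.8]:   # noisy point evaluations
    precision = 1 / var + 1 / noise_var
    mu = (mu / var + y / noise_var) / precision
    var = 1 / precision
print(f"posterior: mean={mu:.3f}, std={var ** 0.5:.3f}")
```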
1 code implementation • 21 Oct 2021 • Mojmír Mutný, Andreas Krause
We study adaptive sensing of Cox point processes, a widely used model from spatial statistics.
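For intuition, a Cox process is a Poisson process whose intensity function is itself random. The sketch below samples a one-dimensional example by thinning, with an illustrative random-Fourier log-intensity; it is not the paper's model or its adaptive sensing rule.

```python
import numpy as np

rng = np.random.default_rng(2)

# A Cox process: first sample a random intensity, then sample a Poisson
# process with that intensity. Here the log-intensity is a random
# Fourier-feature field (an illustrative choice).
w, phase = rng.standard_normal(8), rng.uniform(0, 2 * np.pi, 8)
freqs = rng.uniform(0.5, 3.0, 8)
log_lam = lambda t: (np.cos(np.outer(t, freqs) + phase) @ w) / np.sqrt(8)
lam = lambda t: np.exp(log_lam(t))

# Thinning (Lewis-Shedler): simulate at an upper-bound rate, then keep
# each candidate point with probability lam(t) / bound.
T = 10.0
bound = np.exp(np.abs(w).sum() / np.sqrt(8))   # valid upper bound: |cos| <= 1
n = rng.poisson(bound * T)
cand = rng.uniform(0, T, n)
events = cand[rng.uniform(0, bound, n) < lam(cand)]
print(f"{len(events)} events on [0, {T}]")
```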
no code implementations • 26 Sep 2021 • Zalán Borsos, Mojmír Mutný, Marco Tagliasacchi, Andreas Krause
We demonstrate the effectiveness of our framework for a wide range of models in various settings, including online training of non-convex models and batch active learning.
no code implementations • 21 Jan 2021 • Marc Jourdan, Mojmír Mutný, Johannes Kirschner, Andreas Krause
Combinatorial bandits with semi-bandit feedback generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.
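A small simulation of the semi-bandit protocol, using an illustrative per-arm UCB index rather than any algorithm from the paper: each round the agent plays a size-m set of arms and observes one noisy reward per chosen arm.

```python
import numpy as np

rng = np.random.default_rng(3)
K, m, T = 10, 3, 2000
true_means = rng.uniform(0, 1, K)
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, T + 1):
    # UCB index per arm; play the m arms with the highest indices.
    mean = sums / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    ucb = np.where(counts > 0, mean + bonus, np.inf)
    chosen = np.argsort(ucb)[-m:]
    # Semi-bandit feedback: a noisy reward is observed for *each* chosen arm.
    rewards = true_means[chosen] + 0.1 * rng.standard_normal(m)
    counts[chosen] += 1
    sums[chosen] += rewards

print("best m arms (true):", np.sort(np.argsort(true_means)[-m:]))
print("most played arms:  ", np.sort(np.argsort(counts)[-m:]))
```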
1 code implementation • NeurIPS 2020 • Zalán Borsos, Mojmír Mutný, Andreas Krause
Coresets are small data summaries that are sufficient for model training.
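A minimal sketch of the idea for least squares: importance-sample a weighted subset whose weighted objective approximates the full one. The row-norm sampling probabilities are a simple illustrative sensitivity surrogate, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m = 5000, 5, 200
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Importance-sample a weighted coreset; probabilities proportional to
# squared row norms (a crude sensitivity surrogate for least squares).
p = np.linalg.norm(X, axis=1) ** 2
p /= p.sum()
idx = rng.choice(n, size=m, p=p)
w = 1.0 / (m * p[idx])                       # unbiased reweighting

# Compare the weighted-coreset solution with the full least-squares fit.
full = np.linalg.lstsq(X, y, rcond=None)[0]
Xw = X[idx] * np.sqrt(w)[:, None]
core = np.linalg.lstsq(Xw, y[idx] * np.sqrt(w), rcond=None)[0]
print("parameter gap:", np.linalg.norm(full - core))
```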
no code implementations • 25 Oct 2019 • Mojmír Mutný, Michał Dereziński, Andreas Krause
We analyze the convergence rate of the randomized Newton-like method introduced by Qu et al.
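For intuition, a sketch of a randomized Newton-like step on a quadratic: sample a small coordinate block and invert only the corresponding Hessian sub-block. Uniform block sampling is used here as an illustrative simplification; the paper analyzes determinantal sampling.

```python
import numpy as np

rng = np.random.default_rng(5)
d, tau = 20, 5
A = rng.standard_normal((d, d))
H = A @ A.T + np.eye(d)                 # Hessian of f(x) = 0.5 x^T H x - b^T x
b = rng.standard_normal(d)
x = np.zeros(d)

for _ in range(200):
    S = rng.choice(d, size=tau, replace=False)    # random coordinate block
    g = H @ x - b                                 # gradient of the quadratic
    # Newton step restricted to the sampled block: invert only H[S, S].
    x[S] -= np.linalg.solve(H[np.ix_(S, S)], g[S])

print("residual:", np.linalg.norm(H @ x - b))
```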
2 code implementations • 8 Feb 2019 • Johannes Kirschner, Mojmír Mutný, Nicole Hiller, Rasmus Ischebeck, Andreas Krause
To scale the method while keeping its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems, each of which can be solved efficiently.
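A schematic of the line-restriction idea, with a toy objective and dense evaluation standing in for the one-dimensional Bayesian-optimization sub-solver; the random direction rule is one of several possible choices and everything here is illustrative rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: -np.sum((x - 0.3) ** 2, axis=-1)   # toy objective, optimum at 0.3 * ones
d = 10
x_best = np.zeros(d)

for _ in range(20):
    # Restrict the problem to a random 1-D line through the current best point ...
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    ts = np.linspace(-1.0, 1.0, 201)
    line = x_best + ts[:, None] * direction
    # ... and solve the 1-D sub-problem (dense evaluation here stands in
    # for the efficient one-dimensional Bayesian-optimization step).
    x_best = line[np.argmax(f(line))]

print("best point found:", x_best[:3], "value:", f(x_best))
```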