1 code implementation • 31 Oct 2024 • Kacper Wyrwal, Andreas Krause, Viacheslav Borovitskiy
We propose practical deep Gaussian process models on Riemannian manifolds, similar in spirit to residual neural networks.
1 code implementation • 17 Oct 2024 • Patrik Okanovic, Andreas Kirsch, Jannes Kasper, Torsten Hoefler, Andreas Krause, Nezihe Merve Gürel
We introduce MODEL SELECTOR, a framework for label-efficient selection of pretrained classifiers.
no code implementations • 12 Oct 2024 • Yarden As, Bhavya Sukhija, Lenart Treven, Carmelo Sferrazza, Stelian Coros, Andreas Krause
Under regularity assumptions on the constraints and dynamics, we show that ActSafe guarantees safety during learning while also obtaining a near-optimal policy in finite time.
1 code implementation • 10 Oct 2024 • Jonas Hübotter, Sascha Bongni, Ido Hakimi, Andreas Krause
To address this, we introduce SIFT, a data selection algorithm designed to reduce uncertainty about the model's response given a prompt, which unifies ideas from retrieval and active learning.
Ranked #1 on Language Modelling on The Pile
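The retrieval-plus-active-learning idea can be illustrated with a minimal sketch (not the SIFT algorithm itself; the `select_data` helper and its redundancy penalty are hypothetical simplifications): greedily pick examples that are similar to the prompt, while discounting candidates that duplicate what was already selected, so that each new example adds information.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_data(prompt_emb, candidate_embs, k, redundancy_weight=0.5):
    """Greedy selection: reward similarity to the prompt, penalize
    similarity to examples already chosen (a diversity heuristic)."""
    selected = []
    remaining = list(range(len(candidate_embs)))
    while remaining and len(selected) < k:
        def score(i):
            rel = cosine(prompt_emb, candidate_embs[i])
            red = max((cosine(candidate_embs[i], candidate_embs[j])
                       for j in selected), default=0.0)
            return rel - redundancy_weight * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a prompt embedded at `[1, 0]`, the exact-match candidate is selected first, and subsequent picks trade off relevance against redundancy with it.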
no code implementations • 8 Oct 2024 • Ali Gorji, Andisheh Amrollahi, Andreas Krause
Our algorithm's first step harnesses recent results showing that many real-world predictors have a spectral bias that allows us to either exactly represent (in the case of ensembles of decision trees), or efficiently approximate them (in the case of neural networks) using a compact Fourier representation.
no code implementations • 7 Oct 2024 • Marco Bagatella, Jonas Hübotter, Georg Martius, Andreas Krause
We study this multi-task problem and explore an interactive framework in which the agent adaptively selects the tasks to be demonstrated.
no code implementations • 27 Sep 2024 • Melis Ilayda Bal, Pier Giuseppe Sessa, Mojmir Mutny, Andreas Krause
Crucially, this allows us to efficiently break down the complexity of the combinatorial domain into individual decision sets, making GameOpt scalable to large combinatorial spaces.
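The per-coordinate decomposition can be sketched with plain coordinate ascent over a product of decision sets (a simplified stand-in, assuming a cheap-to-evaluate objective; GameOpt's actual algorithm is more involved than this loop):

```python
def coordinate_ascent(f, decision_sets, sweeps=10):
    """Maximize f over a product of finite decision sets by repeatedly
    optimizing one coordinate at a time, holding the others fixed. Cost
    per sweep is the SUM of the set sizes rather than their product."""
    x = [s[0] for s in decision_sets]
    for _ in range(sweeps):
        changed = False
        for i, s in enumerate(decision_sets):
            best = max(s, key=lambda v: f(x[:i] + [v] + x[i + 1:]))
            if best != x[i]:
                x[i] = best
                changed = True
        if not changed:  # no coordinate can improve: a local equilibrium
            break
    return x
```

For a separable objective such as `f(x) = 2*x[0] - x[1]`, this recovers the global maximizer in one sweep.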
1 code implementation • 13 Sep 2024 • Manish Prajapat, Amon Lahr, Johannes Köhler, Andreas Krause, Melanie N. Zeilinger
Learning uncertain dynamics models using Gaussian process~(GP) regression has been demonstrated to enable high-performance and safety-aware control strategies for challenging real-world applications.
no code implementations • 18 Aug 2024 • Marco Bagatella, Andreas Krause, Georg Martius
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning, as it allows describing objectives beyond the expressivity of conventional discounted return formulations.
no code implementations • 18 Jul 2024 • Riccardo De Santi, Federico Arangath Joseph, Noah Liniger, Mirco Mutti, Andreas Krause
To achieve this, we bridge AE and MDP homomorphisms, which offer a way to exploit known geometric structures via abstraction.
no code implementations • 13 Jul 2024 • Riccardo De Santi, Manish Prajapat, Andreas Krause
In classic Reinforcement Learning (RL), the agent maximizes an additive objective of the visited states, e.g., a value function.
1 code implementation • ICML Workshop on Aligning Reinforcement Learning Experimentalists and Theorists 2024 • Jonas Hübotter, Bhavya Sukhija, Lenart Treven, Yarden As, Andreas Krause
We analyze Safe BO under the lens of a generalization of active learning with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region.
1 code implementation • 24 Jun 2024 • Barna Pásztor, Parnian Kassraie, Andreas Krause
Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries.
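A toy version of learning from pairwise comparisons alone (uniform exploration with Borda-style win rates; the `duel` oracle is a stand-in, and the paper's kernelized method is far more sample-efficient than this sketch):

```python
import random

def borda_scores(duel, n_arms, rounds, rng):
    """Estimate each arm's Borda score (its rate of winning duels
    against random opponents) using only pairwise comparisons:
    duel(i, j) returns the index of the winning arm."""
    wins = [0] * n_arms
    plays = [0] * n_arms
    for _ in range(rounds):
        i, j = rng.randrange(n_arms), rng.randrange(n_arms)
        if i == j:
            continue
        winner = duel(i, j)
        wins[winner] += 1
        plays[i] += 1
        plays[j] += 1
    return [w / p if p else 0.0 for w, p in zip(wins, plays)]
```

With a deterministic oracle where the higher-indexed arm always wins, the best arm's estimated score is 1.0 and the worst arm's is 0.0.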
no code implementations • 17 Jun 2024 • Weronika Ormaniec, Scott Sussex, Lars Lorch, Bernhard Schölkopf, Andreas Krause
Moreover, contrary to the post-hoc standardization of data generated by standard SCMs, we prove that linear iSCMs are less identifiable from prior knowledge on the weights and do not collapse to deterministic relationships in large systems, which may make iSCMs a useful model in causal inference beyond the benchmarking problem studied here.
no code implementations • 6 Jun 2024 • Omar G. Younis, Luca Corinzia, Ioannis N. Athanasiadis, Andreas Krause, Joachim M. Buhmann, Matteo Turchetta
Crop breeding is crucial in improving agricultural productivity while potentially decreasing land usage, greenhouse gas emissions, and water consumption.
no code implementations • 3 Jun 2024 • Vinzenz Thoma, Barna Pasztor, Andreas Krause, Giorgia Ramponi, Yifan Hu
In various applications, the optimal policy in a strategic decision-making problem depends both on the environmental configuration and exogenous events.
1 code implementation • 3 Jun 2024 • Lenart Treven, Bhavya Sukhija, Yarden As, Florian Dörfler, Andreas Krause
Finally, we propose OTaCoS, an efficient model-based algorithm for our setting.
no code implementations • 3 Jun 2024 • Bhavya Sukhija, Lenart Treven, Florian Dörfler, Stelian Coros, Andreas Krause
We study the problem of nonepisodic reinforcement learning (RL) for nonlinear dynamical systems, where the system dynamics are unknown and the RL agent has to learn from a single trajectory, i.e., without resets.
no code implementations • 9 May 2024 • Yarden As, Bhavya Sukhija, Andreas Krause
A major challenge in deploying reinforcement learning in online tasks is ensuring that safety is maintained throughout the learning process.
1 code implementation • 26 Mar 2024 • Mahrokh Ghoddousi Boroujeni, Clara Lucía Galimberti, Andreas Krause, Giancarlo Ferrari-Trecate
Based on these bounds, we propose a new method for designing optimal controllers, offering a principled way to incorporate prior knowledge into the synthesis process, which aids in improving the control policy and mitigating overfitting.
no code implementations • 25 Mar 2024 • Jonas Rothfuss, Bhavya Sukhija, Lenart Treven, Florian Dörfler, Stelian Coros, Andreas Krause
We present SIM-FSVGD for learning robot dynamics from data.
no code implementations • 13 Feb 2024 • Jose Pablo Folch, Calvin Tsay, Robert M Lee, Behrang Shafei, Weronika Ormaniec, Andreas Krause, Mark van der Wilk, Ruth Misener, Mojmír Mutný
This is a parallel to the optimization of an acquisition function in policy space.
2 code implementations • 13 Feb 2024 • Jonas Hübotter, Bhavya Sukhija, Lenart Treven, Yarden As, Andreas Krause
We study a generalization of classical active learning to real-world settings with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region.
no code implementations • 13 Feb 2024 • Jonas Hübotter, Bhavya Sukhija, Lenart Treven, Yarden As, Andreas Krause
We study the question: How can we select the right data for fine-tuning to a specific task?
1 code implementation • 9 Feb 2024 • Manish Prajapat, Johannes Köhler, Matteo Turchetta, Andreas Krause, Melanie N. Zeilinger
Based on this framework we propose an efficient algorithm, SageMPC, SAfe Guaranteed Exploration using Model Predictive Control.
1 code implementation • 8 Feb 2024 • Jiawei Huang, Niao He, Andreas Krause
We study the sample complexity of reinforcement learning (RL) in Mean-Field Games (MFGs) with model-based function approximation that requires strategic exploration to find a Nash Equilibrium policy.
no code implementations • 16 Jan 2024 • Mahrokh Ghoddousi Boroujeni, Andreas Krause, Giancarlo Ferrari Trecate
Personalized federated learning (PFL) goes one step further by adapting the global model to each client, enhancing the model's fit for different clients.
1 code implementation • 13 Dec 2023 • Puck van Gerwen, Ksenia R. Briling, Charlotte Bunne, Vignesh Ram Somnath, Ruben Laplaza, Andreas Krause, Clemence Corminboeuf
We show that, compared to existing models for reaction property prediction, 3DReact offers a flexible framework that exploits atom-mapping information, if available, as well as geometries of reactants and products (in an invariant or equivariant fashion).
no code implementations • 28 Nov 2023 • Mohammad Reza Karimi, Ya-Ping Hsieh, Andreas Krause
Many problems in machine learning can be formulated as solving entropy-regularized optimal transport on the space of probability measures.
no code implementations • 13 Nov 2023 • Arjun Bhardwaj, Jonas Rothfuss, Bhavya Sukhija, Yarden As, Marco Hutter, Stelian Coros, Andreas Krause
We introduce PACOH-RL, a novel model-based Meta-Reinforcement Learning (Meta-RL) algorithm designed to efficiently adapt control policies to changing dynamics.
1 code implementation • NeurIPS 2023 • Bernardo Fichera, Viacheslav Borovitskiy, Andreas Krause, Aude Billard
Gaussian process regression is widely used because of its ability to provide well-calibrated uncertainty estimates and handle small or sparse datasets.
1 code implementation • 28 Oct 2023 • Daniel Robert-Nicoud, Andreas Krause, Viacheslav Borovitskiy
Various applications ranging from robotics to climate science require modeling signals on non-Euclidean domains, such as the sphere.
1 code implementation • 26 Oct 2023 • Lars Lorch, Andreas Krause, Bernhard Schölkopf
We develop a novel approach towards causal inference.
no code implementations • 9 Oct 2023 • Vignesh Ram Somnath, Pier Giuseppe Sessa, Maria Rodriguez Martinez, Andreas Krause
Most traditional and deep learning methods for docking have focused mainly on binary docking, following either a search-based, regression-based, or generative modeling paradigm.
no code implementations • 5 Sep 2023 • Shyam Sundhar Ramesh, Pier Giuseppe Sessa, Yifan Hu, Andreas Krause, Ilija Bogunovic
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
no code implementations • 31 Jul 2023 • Scott Sussex, Pier Giuseppe Sessa, Anastasiia Makarova, Andreas Krause
We formalize this generalization of CBO as Adversarial Causal Bayesian Optimization (ACBO) and introduce the first algorithm for ACBO with bounded regret: Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
1 code implementation • 25 Jul 2023 • Manish Prajapat, Mojmír Mutný, Melanie N. Zeilinger, Andreas Krause
In many important applications, such as coverage control, experiment design and informative path planning, rewards naturally have diminishing returns, i.e., their value decreases in light of similar states visited previously.
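Diminishing returns is exactly submodularity, which a coverage reward makes concrete; a minimal sketch (hypothetical `coverage`/`greedy` helpers, not the paper's algorithm):

```python
def coverage(selected, sets):
    """Coverage reward: number of distinct elements covered. Adding a
    state similar to ones already visited yields little extra value --
    the diminishing-returns (submodularity) property."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    """Greedy maximization; for monotone submodular rewards this is a
    classical (1 - 1/e)-approximation to the best size-k selection."""
    chosen = []
    for _ in range(k):
        gains = {i: coverage(chosen + [i], sets) - coverage(chosen, sets)
                 for i in range(len(sets)) if i not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen
```

Note the submodularity check in the test: the marginal gain of a set shrinks as the selection grows.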
1 code implementation • NeurIPS 2023 • Parnian Kassraie, Nicolas Emmenegger, Andreas Krause, Aldo Pacchiano
This allows us to develop ALEXP, which has an exponentially improved ($\log M$) dependence on $M$ for its regret.
1 code implementation • 29 Jun 2023 • Matej Jusup, Barna Pásztor, Tadeusz Janik, Kenan Zhang, Francesco Corman, Andreas Krause, Ilija Bogunovic
Many applications, e.g., in shared mobility, require coordinating a large number of agents.
no code implementations • 23 Jun 2023 • Christopher Koenig, Miks Ozols, Anastasia Makarova, Efe C. Balta, Andreas Krause, Alisa Rupenyan
Controller tuning and parameter optimization are crucial in system design to improve both the controller and underlying system performance.
1 code implementation • 15 Jun 2023 • Matteo Pariset, Ya-Ping Hsieh, Charlotte Bunne, Andreas Krause, Valentin De Bortoli
Schr\"odinger bridges (SBs) provide an elegant framework for modeling the temporal evolution of populations in physical, chemical, or biological systems.
no code implementations • 13 Jun 2023 • Pragnya Alatur, Giorgia Ramponi, Niao He, Andreas Krause
Multi-agent reinforcement learning (MARL) addresses sequential decision-making problems with multiple agents, where each agent optimizes its own objective.
1 code implementation • 12 Jun 2023 • Daniel Widmer, Dongho Kang, Bhavya Sukhija, Jonas Hübotter, Andreas Krause, Stelian Coros
This paper presents a data-driven strategy to streamline the deployment of model-based controllers in legged robotic hardware platforms.
1 code implementation • 25 May 2023 • David Lindner, Xin Chen, Sebastian Tschiatschek, Katja Hofmann, Andreas Krause
We evaluate CoCoRL in gridworld environments and a driving simulation with multiple constraints.
no code implementations • 16 May 2023 • Ali Gorji, Andisheh Amrollahi, Andreas Krause
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
no code implementations • 9 May 2023 • Yunke Ao, Hooman Esfandiari, Fabio Carrillo, Yarden As, Mazda Farshad, Benjamin F. Grewe, Andreas Krause, Philipp Fuernstahl
Spinal fusion surgery requires highly accurate implantation of pedicle screw implants, which must be conducted in critical proximity to vital structures with a limited view of anatomy.
1 code implementation • 2 Mar 2023 • Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie, Andreas Krause
We study the problem of conservative off-policy evaluation (COPE) where given an offline dataset of environment interactions, collected by other agents, we seek to obtain a (tight) lower bound on a policy's performance.
2 code implementations • 22 Feb 2023 • Vignesh Ram Somnath, Matteo Pariset, Ya-Ping Hsieh, Maria Rodriguez Martinez, Andreas Krause, Charlotte Bunne
Diffusion Schrödinger bridges (DSB) have recently emerged as a powerful framework for recovering stochastic dynamics via their marginal observations at different time points.
no code implementations • 7 Feb 2023 • Johannes Kirschner, Tor Lattimore, Andreas Krause
Partial monitoring is an expressive framework for sequential decision-making with an abundance of applications, including graph-structured and dueling bandits, dynamic pricing and transductive feedback models.
no code implementations • 19 Dec 2022 • Xiang Li, Viraj Mehta, Johannes Kirschner, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, Ilija Bogunovic
Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces.
1 code implementation • 18 Nov 2022 • Scott Sussex, Anastasiia Makarova, Andreas Krause
How should we intervene on an unknown structural equation model to maximize a downstream variable of interest?
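The question can be made concrete with a toy linear SCM, where each candidate do() intervention is simulated and the one maximizing the downstream variable is kept (a sketch under the strong assumption that the SCM is known; the paper tackles the case where the structural equations are unknown):

```python
import random

def simulate(intervention, n=2000, rng=None):
    """Toy linear SCM: X1 <- noise, X2 <- 2*X1 + noise, Y <- X2 - X1 + noise.
    `intervention` maps a variable name to a fixed do() value; returns
    the Monte Carlo estimate of E[Y] under that intervention."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        x1 = intervention.get("X1", rng.gauss(0, 1))
        x2 = intervention.get("X2", 2 * x1 + rng.gauss(0, 1))
        total += x2 - x1 + rng.gauss(0, 1)
    return total / n

def best_intervention(candidates):
    """Pick the do() intervention with the highest average downstream Y."""
    return max(candidates, key=lambda c: simulate(c))
```

Here do(X2=3) wins: it yields E[Y] near 3, versus 1 and -1 for intervening on X1.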
no code implementations • 14 Nov 2022 • Jonas Rothfuss, Martin Josifoski, Vincent Fortuin, Andreas Krause
Meta-Learning aims to speed up the learning process on new tasks by acquiring useful inductive biases from datasets of related learning tasks.
3 code implementations • 3 Nov 2022 • Viacheslav Borovitskiy, Mohammad Reza Karimi, Vignesh Ram Somnath, Andreas Krause
We propose a principled way to define Gaussian process priors on various sets of unweighted graphs: directed or undirected, with or without loops.
no code implementations • 2 Nov 2022 • Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
no code implementations • 27 Oct 2022 • Felix Schur, Parnian Kassraie, Jonas Rothfuss, Andreas Krause
Our algorithm can be paired with any kernelized or linear bandit algorithm and guarantees oracle optimal performance, meaning that as more tasks are solved, the regret of LIBO on each task converges to the regret of the bandit algorithm with oracle knowledge of the true kernel.
1 code implementation • 26 Oct 2022 • Jonas Gehring, Deepak Gopinath, Jungdam Won, Andreas Krause, Gabriel Synnaeve, Nicolas Usunier
Starting with a learned joint latent space, we separately train a generative model of demonstration sequences and an accompanying low-level policy.
no code implementations • NeurIPS 2023 • Mohammad Reza Karimi, Ya-Ping Hsieh, Andreas Krause
Non-convex sampling is a key challenge in machine learning, central to non-convex optimization in deep learning as well as to approximate probabilistic inference.
1 code implementation • 24 Oct 2022 • Krunoslav Lehman Pavasovic, Jonas Rothfuss, Andreas Krause
To circumvent these issues, we approach meta-learning through the lens of functional Bayesian neural network inference, which views the prior as a stochastic process and performs inference in the function space.
no code implementations • 14 Oct 2022 • Shyam Sundhar Ramesh, Pier Giuseppe Sessa, Andreas Krause, Ilija Bogunovic
Contextual Bayesian optimization (CBO) is a powerful framework for sequential decision-making given side information, with important applications, e.g., in wind energy systems.
1 code implementation • 12 Oct 2022 • Manish Prajapat, Matteo Turchetta, Melanie N. Zeilinger, Andreas Krause
In this paper, we aim to efficiently learn the density to approximately solve the coverage problem while preserving the agents' safety.
no code implementations • 4 Oct 2022 • Hossein Esfandiari, Alkis Kalavasis, Amin Karbasi, Andreas Krause, Vahab Mirrokni, Grigoris Velegkas
Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter.
no code implementations • 3 Oct 2022 • Jonas Rothfuss, Christopher Koenig, Alisa Rupenyan, Andreas Krause
In the presence of unknown safety constraints, it is crucial to choose reliable model hyper-parameters to avoid safety violations.
2 code implementations • 21 Jul 2022 • Ilnura Usmanova, Yarden As, Maryam Kamgarpour, Andreas Krause
We introduce a general approach for seeking a stationary point in high dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial.
1 code implementation • 18 Jul 2022 • David Lindner, Andreas Krause, Giorgia Ramponi
We propose a novel IRL algorithm: Active exploration for Inverse Reinforcement Learning (AceIRL), which actively explores an unknown environment and expert policy to quickly learn the expert's reward function and identify a good policy.
no code implementations • 13 Jul 2022 • Parnian Kassraie, Andreas Krause, Ilija Bogunovic
By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret.
no code implementations • 4 Jul 2022 • Sebastian Curi, Armin Lederer, Sandra Hirche, Andreas Krause
Ensuring safety is a crucial challenge when deploying reinforcement learning (RL) to real-world systems.
no code implementations • 29 Jun 2022 • Mojmír Mutný, Tadeusz Janik, Andreas Krause
A key challenge in science and engineering is to design experiments to learn about some unknown quantity of interest.
1 code implementation • 28 Jun 2022 • Charlotte Bunne, Andreas Krause, Marco Cuturi
To account for that context in OT estimation, we introduce CondOT, a multi-task approach to estimate a family of OT maps conditioned on a context variable, using several pairs of measures $\left(\mu_i, \nu_i\right)$ tagged with a context label $c_i$.
no code implementations • 27 Jun 2022 • Max B. Paulus, Giulia Zarpellon, Andreas Krause, Laurent Charlin, Chris J. Maddison
Cutting planes are essential for solving mixed-integer linear problems (MILPs), because they facilitate bound improvements on the optimal solution value.
1 code implementation • 23 Jun 2022 • Mathieu Chevalley, Charlotte Bunne, Andreas Krause, Stefan Bauer
Learning representations that capture the underlying data generating process is a key problem for data efficient and robust use of neural networks.
no code implementations • 14 Jun 2022 • Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, Andreas Krause
We examine a wide class of stochastic approximation algorithms for solving (stochastic) nonlinear problems on Riemannian manifolds.
1 code implementation • 10 Jun 2022 • David Lindner, Sebastian Tschiatschek, Katja Hofmann, Andreas Krause
We provide an instance-dependent lower bound for constrained linear best-arm identification and show that ACOL's sample complexity matches the lower bound in the worst-case.
1 code implementation • 4 Jun 2022 • Christian Toth, Lars Lorch, Christian Knoll, Andreas Krause, Franz Pernkopf, Robert Peharz, Julius von Kügelgen
In this work, we propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning, which jointly infers a posterior over causal models and queries of interest.
1 code implementation • 3 Jun 2022 • Alexander Hägele, Jonas Rothfuss, Lars Lorch, Vignesh Ram Somnath, Bernhard Schölkopf, Andreas Krause
Inferring causal structures from experimentation is a central task in many domains.
no code implementations • 26 May 2022 • Mojmír Mutný, Andreas Krause
In this work, we investigate the optimal design of experiments for estimation of linear functionals in reproducing kernel Hilbert spaces (RKHSs).
1 code implementation • 25 May 2022 • Lars Lorch, Scott Sussex, Jonas Rothfuss, Andreas Krause, Bernhard Schölkopf
Rather than searching over structures, we train a variational inference model to directly predict the causal structure from observational or interventional data.
no code implementations • 9 Apr 2022 • Bhavya Sukhija, Nathanael Köhler, Miguel Zamora, Simon Zimmermann, Sebastian Curi, Andreas Krause, Stelian Coros
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car, and gives good performance in combination with trajectory optimization methods.
1 code implementation • NeurIPS 2021 • Vignesh Ram Somnath, Charlotte Bunne, Andreas Krause
This paper introduces a multi-scale graph construction of a protein -- HoloProt -- connecting surface to structure and sequence.
no code implementations • 26 Mar 2022 • Johannes Kirschner, Mojmir Mutný, Andreas Krause, Jaime Coello de Portugal, Nicole Hiller, Jochem Snuverink
Tuning machine parameters of particle accelerators is a repetitive and time-consuming task that is challenging to automate.
no code implementations • 14 Mar 2022 • Pier Giuseppe Sessa, Maryam Kamgarpour, Andreas Krause
We consider model-based multi-agent reinforcement learning, where the environment transition model is unknown and can only be learned via expensive interactions with the environment.
no code implementations • 11 Feb 2022 • Charlotte Bunne, Ya-Ping Hsieh, Marco Cuturi, Andreas Krause
The static optimal transport (OT) problem between Gaussians seeks to recover an optimal map, or more generally a coupling, to morph a Gaussian into another.
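In one dimension the solution is a classical closed form (a standard textbook fact, not a contribution of this paper): the optimal map under squared-Euclidean cost is an affine rescaling of quantiles, and the 2-Wasserstein distance has an explicit expression.

```python
def gaussian_ot_map(m1, s1, m2, s2):
    """Optimal transport map pushing N(m1, s1^2) onto N(m2, s2^2)
    under squared-Euclidean cost: T(x) = m2 + (s2/s1) * (x - m1)."""
    return lambda x: m2 + (s2 / s1) * (x - m1)

def w2_gaussians(m1, s1, m2, s2):
    """Squared 2-Wasserstein distance between the two 1D Gaussians:
    (m1 - m2)^2 + (s1 - s2)^2."""
    return (m1 - m2) ** 2 + (s1 - s2) ** 2
```

For example, mapping N(0, 1) onto N(3, 4) sends 0 to 3 and 1 to 5, with squared distance 10.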
no code implementations • 3 Feb 2022 • Ilija Bogunovic, Zihan Li, Andreas Krause, Jonathan Scarlett
We consider the sequential optimization of an unknown, continuous, and expensive to evaluate reward function, from noisy and adversarially corrupted observed rewards.
no code implementations • 1 Feb 2022 • Parnian Kassraie, Jonas Rothfuss, Andreas Krause
We demonstrate our approach on the kernelized bandit problem (a.k.a. Bayesian optimization), where we establish regret bounds competitive with those given the true kernel.
1 code implementation • 24 Jan 2022 • Bhavya Sukhija, Matteo Turchetta, David Lindner, Andreas Krause, Sebastian Trimpe, Dominik Baumann
Learning optimal control policies directly on physical systems is challenging since even a single failure can lead to costly hardware damage.
1 code implementation • ICLR 2022 • Yarden As, Ilnura Usmanova, Sebastian Curi, Andreas Krause
Improving sample-efficiency and safety are crucial challenges when deploying reinforcement learning in high-stakes real world applications.
1 code implementation • ICLR 2022 • Octavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi Jaakkola, Andreas Krause
Protein complex formation is a central problem in biology, being involved in most of the cell's processes, and essential for applications, e.g., drug design or protein engineering.
no code implementations • NeurIPS 2021 • Ilija Bogunovic, Andreas Krause
Instead, we introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
1 code implementation • NeurIPS 2021 • Anastasiia Makarova, Ilnura Usmanova, Ilija Bogunovic, Andreas Krause
We generalize BO to trade mean and input-dependent variance of the objective, both of which we assume to be unknown a priori.
1 code implementation • NeurIPS 2021 • Andreas Schlaginhaufen, Philippe Wenk, Andreas Krause, Florian Dörfler
To this end, neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed.
no code implementations • 22 Oct 2021 • Elvis Nava, Mojmír Mutný, Andreas Krause
In Bayesian Optimization (BO) we study black-box function optimization with noisy point evaluations and Bayesian priors.
1 code implementation • 21 Oct 2021 • Mojmír Mutný, Andreas Krause
We study adaptive sensing of Cox point processes, a widely used model from spatial statistics.
1 code implementation • NeurIPS 2021 • Jonas Gehring, Gabriel Synnaeve, Andreas Krause, Nicolas Usunier
We alleviate the need for prior knowledge by proposing a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
no code implementations • 26 Sep 2021 • Zalán Borsos, Mojmír Mutný, Marco Tagliasacchi, Andreas Krause
We show the effectiveness of our framework for a wide range of models in various settings, including training non-convex models online and batch active learning.
no code implementations • NeurIPS 2020 • Pier Giuseppe Sessa, Ilija Bogunovic, Andreas Krause, Maryam Kamgarpour
We formulate the novel class of contextual games, a type of repeated games driven by contextual information at each round.
no code implementations • 8 Jul 2021 • Barna Pásztor, Ilija Bogunovic, Andreas Krause
Learning in multi-agent systems is highly challenging due to several factors including the non-stationarity introduced by agents' interactions and the combinatorial nature of their state and action spaces.
1 code implementation • 7 Jul 2021 • Parnian Kassraie, Andreas Krause
Contextual bandits are a rich model for sequential decision making given side information, with important applications, e.g., in recommender systems.
1 code implementation • NeurIPS 2021 • Lenart Treven, Philippe Wenk, Florian Dörfler, Andreas Krause
Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification.
1 code implementation • 14 Jun 2021 • Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, Andreas Krause
Most current classifiers are vulnerable to adversarial examples, small input perturbations that change the classification output.
2 code implementations • 11 Jun 2021 • Charlotte Bunne, Laetitia Meng-Papaxanthos, Andreas Krause, Marco Cuturi
We propose to model these trajectories as collective realizations of a causal Jordan-Kinderlehrer-Otto (JKO) flow of measures: The JKO scheme posits that the new configuration taken by a population at time $t+1$ is one that trades off an improvement, in the sense that it decreases an energy, while remaining close (in Wasserstein distance) to the previous configuration observed at $t$.
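For intuition, one JKO step for the quadratic energy E(x) = x^2/2 on 1D particles has a closed form: minimizing E(x) + (x - x_t)^2/(2*tau) per particle gives x_{t+1} = x_t/(1 + tau), so the population contracts toward the energy minimum while staying close to its previous configuration (a toy sketch with a hand-picked energy; the paper models observed population trajectories with learned flows of this kind).

```python
def jko_step(particles, tau):
    """One JKO step for E(x) = x^2 / 2. In 1D with equally weighted
    particles the Wasserstein proximal problem decouples per particle,
    and the minimizer of x^2/2 + (x - x_t)^2/(2*tau) is x_t/(1 + tau)."""
    return [x / (1 + tau) for x in particles]

def energy(particles):
    """Average energy E(x) = x^2 / 2 over the particle population."""
    return sum(x * x / 2 for x in particles) / len(particles)
```

Each step strictly decreases the energy, illustrating the trade-off the JKO scheme formalizes.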
2 code implementations • NeurIPS 2021 • Tobias Sutter, Andreas Krause, Daniel Kuhn
Training models that perform well under distribution shifts is a central challenge in machine learning.
no code implementations • NeurIPS 2021 • Jonas Rothfuss, Dominique Heyn, Jinfan Chen, Andreas Krause
When data are scarce meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks.
no code implementations • ICLR 2022 • Yatao Bian, Yu Rong, Tingyang Xu, Jiaxiang Wu, Andreas Krause, Junzhou Huang
By running fixed point iteration for multiple steps, we achieve a trajectory of the valuations, among which we define the valuation with the best conceivable decoupling error as the Variational Index.
no code implementations • arXiv 2021 • Vignesh Ram Somnath, Charlotte Bunne, Connor W. Coley, Andreas Krause, Regina Barzilay
Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule.
Ranked #7 on Single-step retrosynthesis on USPTO-50k
1 code implementation • 2 Jun 2021 • David Lindner, Hoda Heidari, Andreas Krause
To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm.
1 code implementation • ICCV 2021 • Mikhail Usvyatsov, Anastasia Makarova, Rafael Ballester-Ripoll, Maxim Rakhuba, Andreas Krause, Konrad Schindler
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at a fraction of their entries only.
1 code implementation • NeurIPS 2021 • Scott Sussex, Andreas Krause, Caroline Uhler
Causal structure learning is a key problem in many domains.
2 code implementations • NeurIPS 2021 • Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause
In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation.
no code implementations • 25 May 2021 • Johannes Kirschner, Andreas Krause
We consider Bayesian optimization in settings where observations can be adversarially biased, for example by an uncontrolled hidden confounder.
no code implementations • 21 May 2021 • Andreas Krause
I demonstrate that with the market return determined by the equilibrium returns of the CAPM, expected returns of an asset are affected by the risks of all assets jointly.
1 code implementation • NeurIPS 2021 • Manuel Wüthrich, Bernhard Schölkopf, Andreas Krause
These regret bounds illuminate the relationship between the number of evaluations, the domain size (i.e., cardinality of finite domains / Lipschitz constant of the covariance function in continuous domains), and the optimality of the retrieved function value.
1 code implementation • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau
Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.
no code implementations • 18 Mar 2021 • Sebastian Curi, Ilija Bogunovic, Andreas Krause
In real-world tasks, reinforcement learning (RL) agents frequently encounter situations that are not present during training time.
1 code implementation • NeurIPS 2021 • David Lindner, Matteo Turchetta, Sebastian Tschiatschek, Kamil Ciosek, Andreas Krause
For many reinforcement learning (RL) applications, specifying a reward is difficult.
1 code implementation • ICLR 2021 • Núria Armengol Urpí, Sebastian Curi, Andreas Krause
We demonstrate empirically that in the presence of natural distribution-shifts, O-RAAC learns policies with good average performance.
no code implementations • 21 Jan 2021 • Marc Jourdan, Mojmír Mutný, Johannes Kirschner, Andreas Krause
Combinatorial bandits with semi-bandit feedback generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.
no code implementations • 19 Jan 2021 • Christopher König, Matteo Turchetta, John Lygeros, Alisa Rupenyan, Andreas Krause
Thus, our approach builds on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization.
no code implementations • 1 Jan 2021 • Jonas Rothfuss, Martin Josifoski, Andreas Krause
Bayesian deep learning is a promising approach towards improved uncertainty quantification and sample efficiency.
no code implementations • 21 Oct 2020 • Joan Bas-Serrano, Sebastian Curi, Andreas Krause, Gergely Neu
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
1 code implementation • 19 Oct 2020 • Mohammad Reza Karimi, Nezihe Merve Gürel, Bojan Karlaš, Johannes Rausch, Ce Zhang, Andreas Krause
Given $k$ pre-trained classifiers and a stream of unlabeled data examples, how can we actively decide when to query a label so that we can distinguish the best model from the rest while making a small number of queries?
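A minimal sketch of the setting above, using a naive disagreement-based query rule (this heuristic and all names are illustrative assumptions, not the paper's method): labels are only queried on stream examples where the candidate models disagree, since examples on which every model predicts the same class cannot separate them.

```python
def active_model_selection(models, stream, label_fn):
    """Query a label only when the pre-trained models disagree on an example;
    return the model with the most correct predictions on queried points."""
    wins = [0] * len(models)
    queries = 0
    for x in stream:
        preds = [m(x) for m in models]
        if len(set(preds)) == 1:
            continue  # all models agree: a label here is uninformative
        queries += 1
        y = label_fn(x)  # pay for one label
        for i, p in enumerate(preds):
            wins[i] += int(p == y)
    best = max(range(len(models)), key=lambda i: wins[i])
    return best, queries

# Two threshold classifiers on integers; the true threshold is 5.
models = [lambda x: int(x >= 5), lambda x: int(x >= 7)]
stream = list(range(10)) * 20
best, queries = active_model_selection(models, stream, lambda x: int(x >= 5))
```

On this stream of 200 examples, only the 40 points where the two thresholds disagree (x in {5, 6}) trigger a label query, and the correct model is identified.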
1 code implementation • 19 Oct 2020 • Zalán Borsos, Marco Tagliasacchi, Andreas Krause
Active learning is an effective technique for reducing the labeling cost by improving data efficiency.
5 code implementations • ICLR 2021 • Max B. Paulus, Chris J. Maddison, Andreas Krause
Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance.
3 code implementations • 1 Oct 2020 • Chris Wendler, Andisheh Amrollahi, Bastian Seifert, Andreas Krause, Markus Püschel
Many applications of machine learning on discrete domains, such as learning preference functions in recommender systems or auctions, can be reduced to estimating a set function that is sparse in the Fourier domain.
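To make "sparse in the Fourier domain" concrete, the brute-force sketch below (for intuition only; the paper's point is to avoid this exponential computation) expands a set function in the parity basis $\chi_S(T) = (-1)^{|S \cap T|}$ and shows that a simple coverage-style function has only a handful of nonzero coefficients.

```python
from itertools import combinations

def fourier_coefficients(f, n):
    """Exact Fourier (Walsh-Hadamard) coefficients of a set function
    f: subsets of {0..n-1} -> float, in the parity basis
    chi_S(T) = (-1)^{|S & T|}. Exponential in n; for illustration only."""
    subsets = [frozenset(c) for k in range(n + 1)
               for c in combinations(range(n), k)]
    coeffs = {}
    for S in subsets:
        total = sum(f(T) * (-1) ** len(S & T) for T in subsets)
        coeffs[S] = total / 2 ** n
    return coeffs

# f(T) = |T ∩ {0, 1}| on a ground set of size 3
coeffs = fourier_coefficients(lambda T: len(T & {0, 1}), 3)
nonzero = {S for S, c in coeffs.items() if abs(c) > 1e-9}
```

Only 3 of the 8 coefficients are nonzero (the empty set and the two singletons {0} and {1}), i.e., the function is 3-sparse in the Fourier domain.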
1 code implementation • NeurIPS 2020 • Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
no code implementations • 7 Jul 2020 • Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett
We consider a stochastic linear bandit problem in which the rewards are not only subject to random noise, but also adversarial attacks subject to a suitable budget $C$ (i.e., an upper bound on the sum of corruption magnitudes across the time horizon).
no code implementations • 24 Jun 2020 • Yatao Bian, Joachim M. Buhmann, Andreas Krause
We begin with a thorough characterization of the class of continuous submodular functions, and show that continuous submodularity is equivalent to a weak version of the diminishing returns (DR) property.
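The DR property can be checked numerically: for a DR function, the marginal gain from increasing a coordinate is never larger at a coordinatewise-larger point. The sketch below (an illustrative assumption, not taken from the paper) verifies this for $f(x_1, x_2) = x_1 + x_2 - x_1 x_2$, the multilinear extension of the coverage function $f(S) = \min(|S|, 1)$ on a two-element ground set.

```python
import random

def dr_gap(f, x, y, i, c):
    """Marginal gain of raising coordinate i by c at the smaller point x,
    minus the same marginal gain at the larger point y; >= 0 iff the
    diminishing-returns inequality holds for this pair."""
    def bump(z):
        z = list(z)
        z[i] += c
        return tuple(z)
    return (f(bump(x)) - f(x)) - (f(bump(y)) - f(y))

def coverage(x):
    # multilinear extension of f(S) = 1 if S is nonempty, else 0
    x1, x2 = x
    return x1 + x2 - x1 * x2

rng = random.Random(0)
violations = 0
for _ in range(1000):
    x = (rng.uniform(0.0, 0.5), rng.uniform(0.0, 0.5))
    y = (x[0] + rng.uniform(0.0, 0.4), x[1] + rng.uniform(0.0, 0.4))  # y >= x
    if dr_gap(coverage, x, y, rng.randrange(2), c=0.1) < -1e-12:
        violations += 1
```

No violation occurs across 1000 random pairs $x \le y$, consistent with this function being DR-submodular.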
1 code implementation • NeurIPS 2020 • Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, Alekh Agarwal
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
no code implementations • 19 Jun 2020 • Lenart Treven, Sebastian Curi, Mojmir Mutny, Andreas Krause
The principal task in controlling dynamical systems is to ensure their stability.
1 code implementation • NeurIPS 2020 • Sebastian Curi, Felix Berkenkamp, Andreas Krause
Based on this theoretical foundation, we show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models.
1 code implementation • NeurIPS 2020 • Max B. Paulus, Dami Choi, Daniel Tarlow, Andreas Krause, Chris J. Maddison
The Gumbel-Max trick is the basis of many relaxed gradient estimators.
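The trick itself is easy to state and verify empirically: adding i.i.d. standard Gumbel noise to the logits and taking the argmax yields an exact sample from the corresponding softmax distribution. A minimal sketch (standard material, not code from the paper):

```python
import math
import random

def gumbel_max_sample(logits, rng):
    """Gumbel-Max trick: argmax(logits + Gumbel noise) is an exact sample
    from Categorical(softmax(logits))."""
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    perturbed = [l + g for l, g in zip(logits, gumbels)]
    return max(range(len(logits)), key=lambda i: perturbed[i])

rng = random.Random(0)
logits = [1.0, 0.0, -1.0]
z = sum(math.exp(l) for l in logits)
probs = [math.exp(l) / z for l in logits]  # softmax of the logits

n = 20000
counts = [0, 0, 0]
for _ in range(n):
    counts[gumbel_max_sample(logits, rng)] += 1
freqs = [c / n for c in counts]
```

The empirical frequencies match the softmax probabilities; relaxed estimators such as Gumbel-Softmax replace the hard argmax with a temperature-controlled softmax to make this sampling step differentiable.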
2 code implementations • NeurIPS 2021 • Vignesh Ram Somnath, Charlotte Bunne, Connor W. Coley, Andreas Krause, Regina Barzilay
Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule.
no code implementations • L4DC 2020 • Sebastian Curi, Silvan Melchior, Felix Berkenkamp, Andreas Krause
Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.
no code implementations • L4DC 2020 • Ilnura Usmanova, Andreas Krause, Maryam Kamgarpour
For safety-critical black-box optimization tasks, observations of the constraints and the objective are often noisy and available only for the feasible points.
1 code implementation • NeurIPS 2020 • Zalán Borsos, Mojmír Mutný, Andreas Krause
Coresets are small data summaries that are sufficient for model training.
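A toy sketch of the weighted-summary idea (uniform sampling only; the paper's contribution is a much more refined construction, and all names here are illustrative): for the 1-mean cost $\mathrm{cost}(q) = \sum_x (x - q)^2$, a uniform sample in which each point carries weight $n/m$ gives an unbiased estimate of the full cost for every query $q$.

```python
import random

def uniform_coreset(points, m, rng):
    """Uniform-sampling summary: m points drawn with replacement, each
    weighted n/m so the weighted cost is unbiased for the full cost."""
    n = len(points)
    weight = n / m
    return [(rng.choice(points), weight) for _ in range(m)]

def cost(data, q, weighted=False):
    """1-mean cost sum (x - q)^2, on raw points or a weighted summary."""
    if weighted:
        return sum(w * (x - q) ** 2 for x, w in data)
    return sum((x - q) ** 2 for x in data)

rng = random.Random(1)
points = [rng.gauss(0.0, 1.0) for _ in range(10000)]
core = uniform_coreset(points, m=500, rng=rng)
full = cost(points, q=0.5)
approx = cost(core, q=0.5, weighted=True)
```

The 500-point summary approximates the cost of the full 10000-point dataset to within a small relative error; proper coreset constructions sample points proportionally to their sensitivity instead of uniformly, which yields worst-case guarantees.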
no code implementations • ICML 2020 • Aytunc Sahin, Yatao Bian, Joachim M. Buhmann, Andreas Krause
Submodular functions have been studied extensively in machine learning and data mining.
1 code implementation • 2 Apr 2020 • Ankit Dhall, Anastasia Makarova, Octavian Ganea, Dario Pavllo, Michael Greeff, Andreas Krause
Image classification has been studied extensively, but there has been limited work in using unconventional, external guidance other than traditional image-label pairs for training.
1 code implementation • 5 Mar 2020 • Emmanouil Angelis, Philippe Wenk, Bernhard Schölkopf, Stefan Bauer, Andreas Krause
Gaussian processes are an important regression tool with excellent analytic properties which allow for direct integration of derivative observations.
no code implementations • 4 Mar 2020 • Ilija Bogunovic, Andreas Krause, Jonathan Scarlett
We consider the problem of optimizing an unknown (typically non-convex) function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), based on noisy bandit feedback.
no code implementations • 28 Feb 2020 • Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause
We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.
no code implementations • 25 Feb 2020 • Johannes Kirschner, Tor Lattimore, Andreas Krause
Partial monitoring is a rich framework for sequential decision making under uncertainty that generalizes many well known bandit models, including linear, combinatorial and dueling bandits.
no code implementations • 20 Feb 2020 • Johannes Kirschner, Ilija Bogunovic, Stefanie Jegelka, Andreas Krause
Attaining such robustness is the goal of distributionally robust optimization, which seeks a solution to an optimization problem that is worst-case robust under a specified distributional shift of an uncontrolled covariate.
3 code implementations • ICML Workshop LifelongML 2020 • Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, Andreas Krause
Meta-learning can successfully acquire useful inductive biases from data.
1 code implementation • NeurIPS 2019 • Andisheh Amrollahi, Amir Zandieh, Michael Kapralov, Andreas Krause
In this paper we consider the problem of efficiently learning set functions that are defined over a ground set of size $n$ and that are sparse (say $k$-sparse) in the Fourier domain.
no code implementations • 8 Nov 2019 • Mohammad Yaghini, Andreas Krause, Hoda Heidari
Our family of fairness notions corresponds to a new interpretation of economic models of Equality of Opportunity (EOP), and it includes most existing notions of fairness as special cases.
no code implementations • NeurIPS 2019 • Matteo Turchetta, Felix Berkenkamp, Andreas Krause
Existing algorithms for this problem learn about the safety of all decisions to ensure convergence.
no code implementations • 29 Oct 2019 • Matteo Turchetta, Andreas Krause, Sebastian Trimpe
In reinforcement learning (RL), an autonomous agent learns to perform complex tasks by maximizing an exogenous reward signal while interacting with its environment.
1 code implementation • NeurIPS 2020 • Sebastian Curi, Kfir Y. Levy, Stefanie Jegelka, Andreas Krause
In high-stakes machine learning applications, it is crucial to not only perform well on average, but also when restricted to difficult examples.
no code implementations • 25 Oct 2019 • Mojmír Mutný, Michał Dereziński, Andreas Krause
We analyze the convergence rate of the randomized Newton-like method introduced by Qu et al.
1 code implementation • NeurIPS 2019 • Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause
We consider the problem of learning to play a repeated multi-agent game with an unknown reward function.
1 code implementation • 21 Jul 2019 • Jonas Rothfuss, Fabio Ferreira, Simon Boehm, Simon Walther, Maxim Ulrich, Tamim Asfour, Andreas Krause
To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training.
1 code implementation • 16 Jul 2019 • Silvan Melchior, Sebastian Curi, Felix Berkenkamp, Andreas Krause
Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.
no code implementations • 2 Jul 2019 • Erik Daxberger, Anastasia Makarova, Matteo Turchetta, Andreas Krause
However, few methods exist for mixed-variable domains, and none of them can handle the discrete constraints that arise in many real-world applications.
no code implementations • 28 Jun 2019 • Marcello Fiducioso, Sebastian Curi, Benedikt Schumacher, Markus Gwerder, Andreas Krause
Furthermore, this successful attempt paves the way for further use at different levels of HVAC systems, with promising energy, operational, and commissioning cost savings, and it is a practical demonstration of the positive effects that Artificial Intelligence can have on environmental sustainability.
1 code implementation • 27 Jun 2019 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause
We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints.
1 code implementation • NeurIPS 2019 • Johannes Kirschner, Andreas Krause
We introduce a stochastic contextual bandit model where at each time step the environment chooses a distribution over a context set and samples the context from this distribution.
no code implementations • 14 May 2019 • Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka
Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety.
no code implementations • ICLR 2019 • Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Thomas Hofmann, Andreas Krause
Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem.
1 code implementation • 29 Mar 2019 • Zalán Borsos, Sebastian Curi, Kfir Y. Levy, Andreas Krause
Adaptive importance sampling for stochastic optimization is a promising approach that offers improved convergence through variance reduction.
1 code implementation • 22 Feb 2019 • Gabriele Abbati, Philippe Wenk, Michael A. Osborne, Andreas Krause, Bernhard Schölkopf, Stefan Bauer
Stochastic differential equations are an important modeling class in many disciplines.
no code implementations • 21 Feb 2019 • Pragnya Alatur, Kfir Y. Levy, Andreas Krause
We consider a setting where multiple players sequentially choose among a common set of actions (arms).
2 code implementations • 17 Feb 2019 • Philippe Wenk, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, Stefan Bauer
Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting.
1 code implementation • NeurIPS 2019 • Marko Mitrovic, Ehsan Kazemi, Moran Feldman, Andreas Krause, Amin Karbasi
In many machine learning applications, one needs to interactively select a sequence of items (e.g., recommending movies based on a user's feedback) or make sequential decisions in a certain order (e.g., guiding an agent through a series of states).
2 code implementations • 8 Feb 2019 • Johannes Kirschner, Mojmír Mutný, Nicole Hiller, Rasmus Ischebeck, Andreas Krause
In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently.
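In the spirit of that one-dimensional restriction, the toy sketch below (an illustrative assumption: plain grid search stands in for the Bayesian optimization inner loop, and all names are invented) repeatedly draws a random direction through the incumbent and solves the induced 1-D subproblem.

```python
import math
import random

def line_restricted_search(f, x0, iters, grid=25, seed=0):
    """Minimize f by solving a sequence of one-dimensional subproblems:
    each iteration picks a random unit direction through the incumbent
    and grid-searches f along that line over t in [-2, 2]."""
    rng = random.Random(seed)
    x, best = list(x0), f(x0)
    ts = [4.0 * k / (grid - 1) - 2.0 for k in range(grid)]  # includes t = 0
    for _ in range(iters):
        d = [rng.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(v * v for v in d))
        d = [v / norm for v in d]
        # one-dimensional subproblem along x + t * d
        line = [[xi + t * di for xi, di in zip(x, d)] for t in ts]
        vals = [f(p) for p in line]
        j = min(range(grid), key=vals.__getitem__)
        if vals[j] < best:
            best, x = vals[j], line[j]
    return x, best

sphere = lambda x: sum(v * v for v in x)
x_opt, val = line_restricted_search(sphere, [1.0] * 5, iters=50)
```

Each 1-D subproblem is cheap regardless of the ambient dimension, and since the grid contains t = 0, the incumbent never gets worse; on the 5-dimensional sphere function the objective drops from 5.0 to near zero within 50 line searches.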
no code implementations • 10 Jan 2019 • Felix Berkenkamp, Angela P. Schoellig, Andreas Krause
In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters.
1 code implementation • ICLR 2019 • Nikolay Nikolov, Johannes Kirschner, Felix Berkenkamp, Andreas Krause
Efficient exploration remains a major challenge for reinforcement learning.
no code implementations • NeurIPS 2018 • Josip Djolonga, Stefanie Jegelka, Andreas Krause
Submodular maximization problems appear in several