Search Results for author: Andreas Krause

Found 236 papers, 93 papers with code

Teaching Multiple Concepts to a Forgetful Learner

no code implementations NeurIPS 2019 Anette Hunziker, Yuxin Chen, Oisin Mac Aodha, Manuel Gomez Rodriguez, Andreas Krause, Pietro Perona, Yisong Yue, Adish Singla

Our framework is both generic, allowing the design of teaching schedules for different memory models, and interactive, allowing the teacher to adapt the schedule to the underlying forgetting mechanisms of the learner.

Scheduling

Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

no code implementations NeurIPS 2018 Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, Andreas Krause

We draw attention to an important, yet largely overlooked aspect of evaluating fairness for automated decision making systems, namely risk and welfare considerations.

Decision Making Fairness

Optimal DR-Submodular Maximization and Applications to Provable Mean Field Inference

no code implementations 19 May 2018 An Bian, Joachim M. Buhmann, Andreas Krause

Mean field inference in probabilistic models is generally a highly nonconvex problem.

Information Directed Sampling and Bandits with Heteroscedastic Noise

no code implementations 29 Jan 2018 Johannes Kirschner, Andreas Krause

In the stochastic bandit problem, the goal is to maximize an unknown function via a sequence of noisy evaluations.

Bayesian Optimization Thompson Sampling
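
The stochastic bandit setting in this entry is easy to illustrate with a generic upper-confidence-bound loop. This is a minimal sketch, not the paper's information directed sampling algorithm; the arm means, noise level, and horizon below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.2, 0.5, 0.9])   # unknown arm means (hidden from the learner)
T = 2000
counts = np.zeros(3)
sums = np.zeros(3)

for t in range(1, T + 1):
    if t <= 3:
        arm = t - 1                                   # pull each arm once to initialize
    else:
        means = sums / counts
        bonus = np.sqrt(2.0 * np.log(t) / counts)     # optimism: exploration bonus
        arm = int(np.argmax(means + bonus))
    reward = mu[arm] + 0.1 * rng.standard_normal()    # noisy evaluation of the unknown function
    counts[arm] += 1
    sums[arm] += reward

best_arm = int(np.argmax(counts))   # the most-pulled arm after T rounds
```

After enough rounds, the pull counts concentrate on the arm with the highest mean.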

Training Gaussian Mixture Models at Scale via Coresets

no code implementations 23 Mar 2017 Mario Lucic, Matthew Faulkner, Andreas Krause, Dan Feldman

In this work we show how to construct coresets for mixtures of Gaussians.

Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization

no code implementations 21 Mar 2010 Daniel Golovin, Andreas Krause

Solving stochastic optimization problems under partial observability, where one needs to adaptively make decisions with uncertain outcomes, is a fundamental but notoriously difficult challenge.

Active Learning Marketing +1

Learning User Preferences to Incentivize Exploration in the Sharing Economy

no code implementations 17 Nov 2017 Christoph Hirnschall, Adish Singla, Sebastian Tschiatschek, Andreas Krause

We provide formal guarantees on the performance of our algorithm and test the viability of our approach in a user study with data of apartments on Airbnb.

Stochastic Submodular Maximization: The Case of Coverage Functions

no code implementations NeurIPS 2017 Mohammad Reza Karimi, Mario Lucic, Hamed Hassani, Andreas Krause

By exploiting that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions.

Clustering Stochastic Optimization
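
Coverage functions are a canonical example of the submodular objectives this entry studies. Below is a minimal sketch of the discrete objective with the classic greedy baseline; the paper itself optimizes a continuous extension via projected stochastic gradient ascent and then rounds, and the sets and budget here are invented for illustration.

```python
def coverage(sets, chosen):
    """Number of ground-set elements covered by the chosen sets (a submodular function)."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    # classic greedy: repeatedly add the set with the largest marginal coverage gain
    chosen = []
    for _ in range(k):
        rest = [i for i in range(len(sets)) if i not in chosen]
        best = max(rest, key=lambda i: coverage(sets, chosen + [i]))
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
picked = greedy(sets, 2)   # picks {4, 5, 6, 7} first, then {1, 2, 3}
```

For monotone submodular objectives like coverage, this greedy rule is (1 - 1/e)-optimal under a cardinality constraint.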

Learning Implicit Generative Models Using Differentiable Graph Tests

no code implementations 4 Sep 2017 Josip Djolonga, Andreas Krause

Recently, there has been a growing interest in the problem of learning rich implicit models: those from which we can sample, but cannot evaluate their density.

Stochastic Optimization

Efficient Online Learning for Optimizing Value of Information: Theory and Application to Interactive Troubleshooting

no code implementations 16 Mar 2017 Yuxin Chen, Jean-Michel Renders, Morteza Haghir Chehreghani, Andreas Krause

We consider the optimal value of information (VoI) problem, where the goal is to sequentially select a set of tests with a minimal cost, so that one can efficiently make the best decision based on the observed outcomes.

Algorithms for Learning Sparse Additive Models with Interactions in High Dimensions

no code implementations 2 May 2016 Hemant Tyagi, Anastasios Kyrillidis, Bernd Gärtner, Andreas Krause

A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a Sparse Additive Model (SPAM) if it is of the form $f(\mathbf{x}) = \sum_{l \in \mathcal{S}}\phi_{l}(x_l)$, where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll d$.

Additive models Vocal Bursts Intensity Prediction
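
The SPAM form above can be illustrated directly: only a few coordinates of a high-dimensional input contribute, each through its own univariate function. A minimal sketch, with an invented active set $\mathcal{S}$ and invented component functions $\phi_l$:

```python
import numpy as np

d = 100
S = {3, 17, 42}                                # sparse set of active coordinates, |S| << d
phis = {3: np.sin, 17: np.square, 42: np.abs}  # univariate component functions phi_l

def f(x):
    # f(x) = sum over active coordinates only; the other d - 3 inputs are ignored
    return sum(phis[l](x[l]) for l in S)

x = np.zeros(d)
x[3], x[17], x[42] = 0.0, 2.0, -1.5
value = f(x)   # sin(0) + 2**2 + |-1.5| = 5.5
```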

Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains

no code implementations 17 Jun 2016 Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas Krause

Submodular continuous functions are a category of (generally) non-convex/non-concave functions with a wide spectrum of applications.

Data Summarization energy management +1

Uniform Deviation Bounds for Unbounded Loss Functions like k-Means

no code implementations 27 Feb 2017 Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are *unbounded*.

Clustering

Learning to Use Learners' Advice

no code implementations 16 Feb 2017 Adish Singla, Hamed Hassani, Andreas Krause

In our setting, the feedback at any time $t$ is limited in the sense that it is only available to the expert $i^t$ that has been selected by the central algorithm (forecaster), i.e., only the expert $i^t$ receives feedback from the environment and gets to learn at time $t$.

Blocking Multi-Armed Bandits

Coordinated Online Learning With Applications to Learning User Preferences

no code implementations 9 Feb 2017 Christoph Hirnschall, Adish Singla, Sebastian Tschiatschek, Andreas Krause

We study an online multi-task learning setting, in which instances of related tasks arrive sequentially, and are handled by task-specific online learners.

Multi-Task Learning

Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation

no code implementations NeurIPS 2016 Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, Volkan Cevher

We present a new algorithm, truncated variance reduction (TruVaR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion.

Bayesian Optimization Gaussian Processes

Near-optimal Bayesian Active Learning with Correlated and Noisy Tests

no code implementations 24 May 2016 Yuxin Chen, S. Hamed Hassani, Andreas Krause

We consider the Bayesian active learning and experimental design problem, where the goal is to learn the value of some unknown target variable through a sequence of informative, noisy tests.

Active Learning Experimental Design

Distributed Submodular Maximization

no code implementations 3 Nov 2014 Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause

Such problems can often be reduced to maximizing a submodular set function subject to various constraints.

Clustering

Horizontally Scalable Submodular Maximization

no code implementations 31 May 2016 Mario Lucic, Olivier Bachem, Morteza Zadimoghaddam, Andreas Krause

A variety of large-scale machine learning problems can be cast as instances of constrained submodular maximization.

Actively Learning Hemimetrics with Applications to Eliciting User Preferences

no code implementations 23 May 2016 Adish Singla, Sebastian Tschiatschek, Andreas Krause

We propose an active learning algorithm that substantially reduces this sample complexity by exploiting the structural constraints on the version space of hemimetrics.

Active Learning

Better safe than sorry: Risky function exploitation through safe optimization

no code implementations 2 Feb 2016 Eric Schulz, Quentin J. M. Huys, Dominik R. Bach, Maarten Speekenbrink, Andreas Krause

Exploration-exploitation of functions, that is, learning and optimizing a mapping between inputs and expected outputs, is ubiquitous in many real-world situations.

Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning

no code implementations 2 May 2016 Mario Lucic, Mesrob I. Ohannessian, Amin Karbasi, Andreas Krause

Using k-means clustering as a prototypical unsupervised learning problem, we show how we can strategically summarize the data (control space) in order to trade off risk and time when data is generated by a probabilistic model.

Clustering Navigate

Strong Coresets for Hard and Soft Bregman Clustering with Applications to Exponential Family Mixtures

no code implementations 21 Aug 2015 Mario Lucic, Olivier Bachem, Andreas Krause

We propose a single, practical algorithm to construct strong coresets for a large class of hard and soft clustering problems based on Bregman divergences.

Clustering

Learning Sparse Additive Models with Interactions in High Dimensions

no code implementations 18 Apr 2016 Hemant Tyagi, Anastasios Kyrillidis, Bernd Gärtner, Andreas Krause

For some $\mathcal{S}_1 \subset [d], \mathcal{S}_2 \subset {[d] \choose 2}$, the function $f$ is assumed to be of the form: $$f(\mathbf{x}) = \sum_{p \in \mathcal{S}_1}\phi_{p} (x_p) + \sum_{(l, l^{\prime}) \in \mathcal{S}_2}\phi_{(l, l^{\prime})} (x_{l}, x_{l^{\prime}}).$$ Assuming $\phi_{p},\phi_{(l, l^{\prime})}$, $\mathcal{S}_1$ and $\mathcal{S}_2$ to be unknown, we provide a randomized algorithm that queries $f$ and exactly recovers $\mathcal{S}_1,\mathcal{S}_2$.

Additive models Vocal Bursts Intensity Prediction

Noisy Submodular Maximization via Adaptive Sampling with Applications to Crowdsourced Image Collection Summarization

no code implementations 23 Nov 2015 Adish Singla, Sebastian Tschiatschek, Andreas Krause

When the underlying submodular function is unknown, users' feedback can provide noisy evaluations of the function that we seek to maximize.

Learning to Hire Teams

no code implementations 12 Aug 2015 Adish Singla, Eric Horvitz, Pushmeet Kohli, Andreas Krause

Furthermore, we consider an embedding of the tasks and workers in an underlying graph that may arise from task similarities or social ties, and that can provide additional side-observations for faster learning.

Crowd Access Path Optimization: Diversity Matters

no code implementations 8 Aug 2015 Besmira Nushi, Adish Singla, Anja Gruenheid, Erfan Zamanian, Andreas Krause, Donald Kossmann

Based on this intuitive idea, we introduce the Access Path Model (APM), a novel crowd model that leverages the notion of access paths as an alternative way of retrieving information.

Building Hierarchies of Concepts via Crowdsourcing

no code implementations 27 Apr 2015 Yuyin Sun, Adish Singla, Dieter Fox, Andreas Krause

Hierarchies of concepts are useful in many applications from navigation to organization of objects.

Discovering Valuable Items from Massive Data

no code implementations 2 Jun 2015 Hastagiri P. Vanchinathan, Andreas Marfurt, Charles-Antoine Robelin, Donald Kossmann, Andreas Krause

Given a budget on the cumulative cost of the selected items, how can we pick a subset of maximal value?

Recommendation Systems

Information Gathering in Networks via Active Exploration

no code implementations 24 Apr 2015 Adish Singla, Eric Horvitz, Pushmeet Kohli, Ryen White, Andreas Krause

How should we gather information in a network, where each node's visibility is limited to its local neighborhood?

Experimental Design Informativeness +1

Lazier Than Lazy Greedy

no code implementations 28 Sep 2014 Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, Andreas Krause

Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice?

Clustering Data Summarization
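
The speedup asked about above comes from a stochastic variant of greedy that evaluates marginal gains only on a small random sample of the ground set at each step, rather than on all remaining elements. A minimal sketch of that idea on an invented coverage instance ($\epsilon$ and the toy sets are illustrative, not from the paper's experiments):

```python
import math
import random

def coverage(sets, chosen):
    """Number of ground-set elements covered by the chosen sets (submodular)."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def stochastic_greedy(sets, k, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(sets)
    # sample roughly (n / k) * log(1 / eps) candidates per step
    sample_size = min(n, max(1, math.ceil(n / k * math.log(1 / eps))))
    chosen = []
    for _ in range(k):
        pool = [i for i in range(n) if i not in chosen]
        sample = rng.sample(pool, min(sample_size, len(pool)))
        base = coverage(sets, chosen)
        best = max(sample, key=lambda i: coverage(sets, chosen + [i]) - base)
        chosen.append(best)
    return chosen

sets = [{1, 2}, {2, 3, 4}, {5}, {1, 5, 6}, {6, 7, 8, 9}]
picked = stochastic_greedy(sets, 2)
```

On large ground sets the per-step sample is much smaller than the pool, which is where the speedup over lazy greedy comes from.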

Near-Optimally Teaching the Crowd to Classify

no code implementations 10 Feb 2014 Adish Singla, Ilija Bogunovic, Gábor Bartók, Amin Karbasi, Andreas Krause

How should we present training examples to learners to teach them classification rules?

A Utility-Theoretic Approach to Privacy in Online Services

no code implementations 16 Jan 2014 Andreas Krause, Eric Horvitz

We introduce and explore an economics of privacy in personalization, where people can opt to share personal information, in a standing or on-demand manner, in return for expected enhancements in the quality of an online service.

Optimal Value of Information in Graphical Models

no code implementations 15 Jan 2014 Andreas Krause, Carlos Guestrin

In a sensor network, for example, it is important to select the subset of sensors that is expected to provide the strongest reduction in uncertainty.

Decision Making Scheduling

Efficient Informative Sensing using Multiple Robots

no code implementations 15 Jan 2014 Amarjeet Singh, Andreas Krause, Carlos Guestrin, William J. Kaiser

In this paper, we present an efficient approach for near-optimally solving the NP-hard optimization problem of planning such informative paths.

Near-Optimal Bayesian Active Learning with Noisy Observations

no code implementations NeurIPS 2010 Daniel Golovin, Andreas Krause, Debajyoti Ray

In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally.

Active Learning Experimental Design

Incentives for Privacy Tradeoff in Community Sensing

no code implementations 19 Aug 2013 Adish Singla, Andreas Krause

Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications.

Adaptive Input Estimation in Linear Dynamical Systems with Applications to Learning-from-Observations

no code implementations 19 Jun 2018 Sebastian Curi, Kfir Y. Levy, Andreas Krause

To this end, we introduce a novel estimation algorithm that explicitly trades off bias and variance to optimally reduce the overall estimation error.

Imitation Learning

Discrete Sampling using Semigradient-based Product Mixtures

no code implementations 4 Jul 2018 Alkis Gotovos, Hamed Hassani, Andreas Krause, Stefanie Jegelka

We consider the problem of inference in discrete probabilistic models, that is, distributions over subsets of a finite ground set.

Point Processes

A Moral Framework for Understanding of Fair ML through Economic Models of Equality of Opportunity

no code implementations 10 Sep 2018 Hoda Heidari, Michele Loi, Krishna P. Gummadi, Andreas Krause

In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness.

Fairness Philosophy

Learning to Compensate Photovoltaic Power Fluctuations from Images of the Sky by Imitating an Optimal Policy

no code implementations 13 Nov 2018 Robin Spiess, Felix Berkenkamp, Jan Poland, Andreas Krause

In this paper, we present a deep learning approach that uses images of the sky to compensate power fluctuations predictively and reduces battery stress.

Imitation Learning

Provable Variational Inference for Constrained Log-Submodular Models

no code implementations NeurIPS 2018 Josip Djolonga, Stefanie Jegelka, Andreas Krause

Submodular maximization problems appear in several areas of machine learning and data science, as many useful modelling concepts such as diversity and coverage satisfy this natural diminishing returns property.

Variational Inference

Interactive Submodular Bandit

no code implementations NeurIPS 2017 Lin Chen, Andreas Krause, Amin Karbasi

We then receive noisy feedback about the utility of the action (e.g., ratings), which we model as a submodular function over the context-action space.

Data Summarization Movie Recommendation +1

Differentiable Learning of Submodular Models

no code implementations NeurIPS 2017 Josip Djolonga, Andreas Krause

In this paper we focus on the problem of submodular minimization, for which we show that such layers are indeed possible.

Variational Inference

Cooperative Graphical Models

no code implementations NeurIPS 2016 Josip Djolonga, Stefanie Jegelka, Sebastian Tschiatschek, Andreas Krause

We study a rich family of distributions that capture variable interactions significantly more expressive than those representable with low-treewidth or pairwise graphical models, or log-supermodular models.

Variational Inference

Variational Inference in Mixed Probabilistic Submodular Models

no code implementations NeurIPS 2016 Josip Djolonga, Sebastian Tschiatschek, Andreas Krause

We consider the problem of variational inference in probabilistic models with both log-submodular and log-supermodular higher-order potentials.

Variational Inference

Sampling from Probabilistic Submodular Models

no code implementations NeurIPS 2015 Alkis Gotovos, Hamed Hassani, Andreas Krause

Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively.

Point Processes

Efficient Sampling for Learning Sparse Additive Models in High Dimensions

no code implementations NeurIPS 2014 Hemant Tyagi, Bernd Gärtner, Andreas Krause

We consider the problem of learning sparse additive models, i.e., functions of the form $f(\mathbf{x}) = \sum_{l \in S} \phi_{l}(x_l)$, $\mathbf{x} \in \mathbb{R}^d$, from point queries of $f$.

Additive models Compressive Sensing +1

Efficient Partial Monitoring with Prior Information

no code implementations NeurIPS 2014 Hastagiri P. Vanchinathan, Gábor Bartók, Andreas Krause

In every round, the learner suffers some loss and receives some feedback based on the action and the outcome.

High-Dimensional Gaussian Process Bandits

no code implementations NeurIPS 2013 Josip Djolonga, Andreas Krause, Volkan Cevher

Many applications in machine learning require optimizing unknown functions defined over a high-dimensional space from noisy samples that are expensive to obtain.

Bayesian Optimization Vocal Bursts Intensity Prediction

Scalable Training of Mixture Models via Coresets

no code implementations NeurIPS 2011 Dan Feldman, Matthew Faulkner, Andreas Krause

In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations.

Density Estimation

Contextual Gaussian Process Bandit Optimization

no code implementations NeurIPS 2011 Andreas Krause, Cheng S. Ong

How should we design experiments to maximize performance of a complex system, taking into account uncontrollable environmental conditions?

Management

Efficient Minimization of Decomposable Submodular Functions

no code implementations NeurIPS 2010 Peter Stobbe, Andreas Krause

Decomposable submodular functions are those that can be represented as sums of concave functions applied to linear functions.
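
Concretely, such a function can be written as $F(S) = \sum_j g_j(w_j^\top \mathbf{1}_S)$ with each $g_j$ concave and $w_j$ nonnegative. A minimal sketch, with invented weights and $g(t) = \sqrt{t}$, showing the resulting diminishing-returns property:

```python
import math

w = [[1.0, 2.0, 0.0],   # one nonnegative weight vector per concave term
     [0.0, 1.0, 3.0]]

def F(S):
    # F(S) = sum_j sqrt(w_j . 1_S): concave applied to linear, hence submodular
    return sum(math.sqrt(sum(wj[i] for i in S)) for wj in w)

gain_alone = F({1}) - F(set())         # marginal gain of element 1 on its own
gain_later = F({0, 1, 2}) - F({0, 2})  # marginal gain of element 1 added last
```

Diminishing returns means the gain of adding element 1 shrinks once elements 0 and 2 are already present.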

Online Learning of Assignments

no code implementations NeurIPS 2009 Matthew Streeter, Daniel Golovin, Andreas Krause

Which ads should we display in sponsored search in order to maximize our revenue?

Distributed and Provably Good Seedings for k-Means in Constant Rounds

no code implementations ICML 2017 Olivier Bachem, Mario Lucic, Andreas Krause

The k-Means++ algorithm is the state-of-the-art algorithm for solving k-Means clustering problems, as the computed clusterings are O(log k)-competitive in expectation.

Clustering
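
The k-Means++ (D²) seeding rule referenced above samples each new center with probability proportional to its squared distance from the nearest center chosen so far. A minimal sketch of the sequential baseline (not the paper's distributed constant-round variant), on invented toy data:

```python
import numpy as np

def dsquared_seeding(X, k, rng):
    # start from a uniformly random point, then apply D^2 sampling
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),   # tight cluster near the origin
               rng.normal(5.0, 0.1, (50, 2))])  # tight cluster near (5, 5)
centers = dsquared_seeding(X, 2, rng)
```

With well-separated clusters, the D² weighting makes the second center land in the cluster the first one missed with overwhelming probability.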

Uniform Deviation Bounds for k-Means Clustering

no code implementations ICML 2017 Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are unbounded.

Clustering

Probabilistic Submodular Maximization in Sub-Linear Time

no code implementations ICML 2017 Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi

As a remedy, we introduce the problem of sublinear time probabilistic submodular maximization: Given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution will provide almost as much value when restricted to the reduced ground set as when using the full set.

Recommendation Systems

Evaluating GANs via Duality

no code implementations ICLR 2019 Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Thomas Hofmann, Andreas Krause

Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem.

No-Regret Bayesian Optimization with Unknown Hyperparameters

no code implementations 10 Jan 2019 Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters.

Bayesian Optimization

Fake News Detection in Social Networks via Crowd Signals

no code implementations 24 Nov 2017 Sebastian Tschiatschek, Adish Singla, Manuel Gomez Rodriguez, Arpit Merchant, Andreas Krause

The main objective of our work is to minimize the spread of misinformation by stopping the propagation of fake news in the network.

Social and Information Networks

Multi-Player Bandits: The Adversarial Case

no code implementations 21 Feb 2019 Pragnya Alatur, Kfir Y. Levy, Andreas Krause

We consider a setting where multiple players sequentially choose among a common set of actions (arms).

Learning Generative Models across Incomparable Spaces

no code implementations 14 May 2019 Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka

Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety.

Relational Reasoning

Stochastic Bandits with Context Distributions

1 code implementation NeurIPS 2019 Johannes Kirschner, Andreas Krause

We introduce a stochastic contextual bandit model where at each time step the environment chooses a distribution over a context set and samples the context from this distribution.

Safe Contextual Bayesian Optimization for Sustainable Room Temperature PID Control Tuning

no code implementations 28 Jun 2019 Marcello Fiducioso, Sebastian Curi, Benedikt Schumacher, Markus Gwerder, Andreas Krause

Furthermore, this successful attempt paves the way for further use at different levels of HVAC systems, with promising energy, operational, and commissioning cost savings, and it is a practical demonstration of the positive effects that Artificial Intelligence can have on environmental sustainability.

Bayesian Optimization

Mixed-Variable Bayesian Optimization

no code implementations 2 Jul 2019 Erik Daxberger, Anastasia Makarova, Matteo Turchetta, Andreas Krause

However, few methods exist for mixed-variable domains and none of them can handle discrete constraints that arise in many real-world applications.

Bayesian Optimization Thompson Sampling

Robust Model-free Reinforcement Learning with Multi-objective Bayesian Optimization

no code implementations 29 Oct 2019 Matteo Turchetta, Andreas Krause, Sebastian Trimpe

In reinforcement learning (RL), an autonomous agent learns to perform complex tasks by maximizing an exogenous reward signal while interacting with its environment.

Bayesian Optimization reinforcement-learning +1

A Human-in-the-loop Framework to Construct Context-aware Mathematical Notions of Outcome Fairness

no code implementations 8 Nov 2019 Mohammad Yaghini, Andreas Krause, Hoda Heidari

Our family of fairness notions corresponds to a new interpretation of economic models of Equality of Opportunity (EOP), and it includes most existing notions of fairness as special cases.

Decision Making Fairness

Distributionally Robust Bayesian Optimization

no code implementations 20 Feb 2020 Johannes Kirschner, Ilija Bogunovic, Stefanie Jegelka, Andreas Krause

Attaining such robustness is the goal of distributionally robust optimization, which seeks a solution to an optimization problem that is worst-case robust under a specified distributional shift of an uncontrolled covariate.

Bayesian Optimization

Information Directed Sampling for Linear Partial Monitoring

no code implementations 25 Feb 2020 Johannes Kirschner, Tor Lattimore, Andreas Krause

Partial monitoring is a rich framework for sequential decision making under uncertainty that generalizes many well known bandit models, including linear, combinatorial and dueling bandits.

Decision Making Decision Making Under Uncertainty

Mixed Strategies for Robust Optimization of Unknown Objectives

no code implementations 28 Feb 2020 Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause

We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.

Autonomous Vehicles Gaussian Processes +1

Corruption-Tolerant Gaussian Process Bandit Optimization

no code implementations 4 Mar 2020 Ilija Bogunovic, Andreas Krause, Jonathan Scarlett

We consider the problem of optimizing an unknown (typically non-convex) function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), based on noisy bandit feedback.

Continuous Submodular Function Maximization

no code implementations 24 Jun 2020 Yatao Bian, Joachim M. Buhmann, Andreas Krause

We start by a thorough characterization of the class of continuous submodular functions, and show that continuous submodularity is equivalent to a weak version of the diminishing returns (DR) property.

Stochastic Linear Bandits Robust to Adversarial Attacks

no code implementations 7 Jul 2020 Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett

We consider a stochastic linear bandit problem in which the rewards are not only subject to random noise, but also adversarial attacks subject to a suitable budget $C$ (i.e., an upper bound on the sum of corruption magnitudes across the time horizon).

Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory

no code implementations 1 Jan 2021 Jonas Rothfuss, Martin Josifoski, Andreas Krause

Bayesian deep learning is a promising approach towards improved uncertainty quantification and sample efficiency.

Meta-Learning Uncertainty Quantification +1

Logistic Q-Learning

no code implementations 21 Oct 2020 Joan Bas-Serrano, Sebastian Curi, Andreas Krause, Gergely Neu

We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.

Q-Learning Reinforcement Learning (RL)

Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback

no code implementations 21 Jan 2021 Marc Jourdan, Mojmír Mutný, Johannes Kirschner, Andreas Krause

Combinatorial bandits with semi-bandit feedback generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.

Multi-Armed Bandits

Regret Bounds for Gaussian-Process Optimization in Large Domains

1 code implementation NeurIPS 2021 Manuel Wüthrich, Bernhard Schölkopf, Andreas Krause

These regret bounds illuminate the relationship between the number of evaluations, the domain size (i.e., cardinality of finite domains / Lipschitz constant of the covariance function in continuous domains), and the optimality of the retrieved function value.

A note on the CAPM with endogenously consistent market returns

no code implementations 21 May 2021 Andreas Krause

I demonstrate that with the market return determined by the equilibrium returns of the CAPM, expected returns of an asset are affected by the risks of all assets jointly.

Bias-Robust Bayesian Optimization via Dueling Bandits

no code implementations 25 May 2021 Johannes Kirschner, Andreas Krause

We consider Bayesian optimization in settings where observations can be adversarially biased, for example by an uncontrolled hidden confounder.

Bayesian Optimization

Addressing the Long-term Impact of ML Decisions via Policy Regret

1 code implementation 2 Jun 2021 David Lindner, Hoda Heidari, Andreas Krause

To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm.

Multi-Armed Bandits

Meta-Learning Reliable Priors in the Function Space

no code implementations NeurIPS 2021 Jonas Rothfuss, Dominique Heyn, Jinfan Chen, Andreas Krause

When data are scarce, meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks.

Bayesian Optimization Decision Making +2

Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning

no code implementations ICLR 2022 Yatao Bian, Yu Rong, Tingyang Xu, Jiaxiang Wu, Andreas Krause, Junzhou Huang

By running fixed point iteration for multiple steps, we achieve a trajectory of the valuations, among which we define the valuation with the best conceivable decoupling error as the Variational Index.

Data Valuation Variational Inference

Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning

no code implementations 8 Jul 2021 Barna Pásztor, Ilija Bogunovic, Andreas Krause

Learning in multi-agent systems is highly challenging due to several factors including the non-stationarity introduced by agents' interactions and the combinatorial nature of their state and action spaces.

Gaussian Processes Model-based Reinforcement Learning +2

Contextual Games: Multi-Agent Learning with Side Information

no code implementations NeurIPS 2020 Pier Giuseppe Sessa, Ilija Bogunovic, Andreas Krause, Maryam Kamgarpour

We formulate the novel class of contextual games, a type of repeated games driven by contextual information at each round.

Data Summarization via Bilevel Optimization

no code implementations 26 Sep 2021 Zalán Borsos, Mojmír Mutný, Marco Tagliasacchi, Andreas Krause

We show the effectiveness of our framework for a wide range of models in various settings, including training non-convex models online and batch active learning.

Active Learning Bilevel Optimization +2

Diversified Sampling for Batched Bayesian Optimization with Determinantal Point Processes

no code implementations 22 Oct 2021 Elvis Nava, Mojmír Mutný, Andreas Krause

In Bayesian Optimization (BO) we study black-box function optimization with noisy point evaluations and Bayesian priors.

Bayesian Optimization Point Processes +1

Misspecified Gaussian Process Bandit Optimization

no code implementations NeurIPS 2021 Ilija Bogunovic, Andreas Krause

Instead, we introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).

Safe non-smooth black-box optimization with application to policy search

no code implementations L4DC 2020 Ilnura Usmanova, Andreas Krause, Maryam Kamgarpour

For safety-critical black-box optimization tasks, observations of the constraints and the objective are often noisy and available only for the feasible points.

Meta-Learning Hypothesis Spaces for Sequential Decision-making

no code implementations 1 Feb 2022 Parnian Kassraie, Jonas Rothfuss, Andreas Krause

We demonstrate our approach on the kernelized bandit problem (a.k.a. Bayesian optimization), where we establish regret bounds competitive with those given the true kernel.

Bayesian Optimization Decision Making +3

A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits

no code implementations 3 Feb 2022 Ilija Bogunovic, Zihan Li, Andreas Krause, Jonathan Scarlett

We consider the sequential optimization of an unknown, continuous, and expensive to evaluate reward function, from noisy and adversarially corrupted observed rewards.

Learning Graph Models for Template-Free Retrosynthesis

no code implementations arXiv 2021 Vignesh Ram Somnath, Charlotte Bunne, Connor W. Coley, Andreas Krause, Regina Barzilay

Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule.

Retrosynthesis Single-step retrosynthesis

The Schrödinger Bridge between Gaussian Measures has a Closed Form

no code implementations 11 Feb 2022 Charlotte Bunne, Ya-Ping Hsieh, Marco Cuturi, Andreas Krause

The static optimal transport (OT) problem between Gaussians seeks to recover an optimal map, or more generally a coupling, to morph a Gaussian into another.

Gaussian Processes MORPH
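The static closed form this abstract refers to is classical and predates the paper: the optimal transport map between two Gaussians $N(m_1, \Sigma_1)$ and $N(m_2, \Sigma_2)$ is affine. A minimal NumPy sketch of that known formula (for intuition only, not code from the paper):

```python
import numpy as np

def psd_sqrt(S):
    """Symmetric PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_monge_matrix(S1, S2):
    """Matrix A of the optimal map T(x) = m2 + A (x - m1) between
    N(m1, S1) and N(m2, S2):
        A = S1^{-1/2} (S1^{1/2} S2 S1^{1/2})^{1/2} S1^{-1/2}."""
    r1 = psd_sqrt(S1)
    r1_inv = np.linalg.inv(r1)
    return r1_inv @ psd_sqrt(r1 @ S2 @ r1) @ r1_inv

S1 = np.eye(2)
S2 = np.array([[2.0, 0.5], [0.5, 1.0]])
A = gaussian_monge_matrix(S1, S2)
# Pushing N(m1, S1) through T yields N(m2, S2): cov(T(x)) = A S1 A = S2.
print(np.allclose(A @ S1 @ A, S2))  # True
```

The identity `A @ S1 @ A == S2` is exactly the pushforward-covariance condition that characterizes the Monge map between Gaussians.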

Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation

no code implementations14 Mar 2022 Pier Giuseppe Sessa, Maryam Kamgarpour, Andreas Krause

We consider model-based multi-agent reinforcement learning, where the environment transition model is unknown and can only be learned via expensive interactions with the environment.

Autonomous Driving Gaussian Processes +3

Gradient-Based Trajectory Optimization With Learned Dynamics

no code implementations9 Apr 2022 Bhavya Sukhija, Nathanael Köhler, Miguel Zamora, Simon Zimmermann, Sebastian Curi, Andreas Krause, Stelian Coros

In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car, and gives good performance in combination with trajectory optimization methods.

Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces

no code implementations26 May 2022 Mojmír Mutný, Andreas Krause

In this work, we investigate the optimal design of experiments for {\em estimation of linear functionals in reproducing kernel Hilbert spaces (RKHSs)}.

Experimental Design

Riemannian stochastic approximation algorithms

no code implementations14 Jun 2022 Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, Andreas Krause

We examine a wide class of stochastic approximation algorithms for solving (stochastic) nonlinear problems on Riemannian manifolds.

Riemannian optimization

Learning To Cut By Looking Ahead: Cutting Plane Selection via Imitation Learning

no code implementations27 Jun 2022 Max B. Paulus, Giulia Zarpellon, Andreas Krause, Laurent Charlin, Chris J. Maddison

Cutting planes are essential for solving mixed-integer linear problems (MILPs), because they facilitate bound improvements on the optimal solution value.

Imitation Learning

Active Exploration via Experiment Design in Markov Chains

no code implementations29 Jun 2022 Mojmír Mutný, Tadeusz Janik, Andreas Krause

A key challenge in science and engineering is to design experiments to learn about some unknown quantity of interest.

Experimental Design

Graph Neural Network Bandits

no code implementations13 Jul 2022 Parnian Kassraie, Andreas Krause, Ilija Bogunovic

By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret.

Drug Discovery

Meta-Learning Priors for Safe Bayesian Optimization

no code implementations3 Oct 2022 Jonas Rothfuss, Christopher Koenig, Alisa Rupenyan, Andreas Krause

In the presence of unknown safety constraints, it is crucial to choose reliable model hyper-parameters to avoid safety violations.

Bayesian Optimization Meta-Learning +1

Replicable Bandits

no code implementations4 Oct 2022 Hossein Esfandiari, Alkis Kalavasis, Amin Karbasi, Andreas Krause, Vahab Mirrokni, Grigoris Velegkas

Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter.

Multi-Armed Bandits

Movement Penalized Bayesian Optimization with Application to Wind Energy Systems

no code implementations14 Oct 2022 Shyam Sundhar Ramesh, Pier Giuseppe Sessa, Andreas Krause, Ilija Bogunovic

Contextual Bayesian optimization (CBO) is a powerful framework for sequential decision-making given side information, with important applications, e.g., in wind energy systems.

Bayesian Optimization Decision Making

Lifelong Bandit Optimization: No Prior and No Regret

no code implementations27 Oct 2022 Felix Schur, Parnian Kassraie, Jonas Rothfuss, Andreas Krause

Our algorithm can be paired with any kernelized or linear bandit algorithm and guarantees oracle optimal performance, meaning that as more tasks are solved, the regret of LIBO on each task converges to the regret of the bandit algorithm with oracle knowledge of the true kernel.

Instance-Dependent Generalization Bounds via Optimal Transport

no code implementations2 Nov 2022 Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss

Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.

Generalization Bounds Inductive Bias

Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice

no code implementations14 Nov 2022 Jonas Rothfuss, Martin Josifoski, Vincent Fortuin, Andreas Krause

Meta-Learning aims to speed up the learning process on new tasks by acquiring useful inductive biases from datasets of related learning tasks.

Gaussian Processes Meta-Learning +1

Near-optimal Policy Identification in Active Reinforcement Learning

no code implementations19 Dec 2022 Xiang Li, Viraj Mehta, Johannes Kirschner, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, Ilija Bogunovic

Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces.

Bayesian Optimization reinforcement-learning +1

Linear Partial Monitoring for Sequential Decision-Making: Algorithms, Regret Bounds and Applications

no code implementations7 Feb 2023 Johannes Kirschner, Tor Lattimore, Andreas Krause

Partial monitoring is an expressive framework for sequential decision-making with an abundance of applications, including graph-structured and dueling bandits, dynamic pricing and transductive feedback models.

Decision Making

Hallucinated Adversarial Control for Conservative Offline Policy Evaluation

1 code implementation2 Mar 2023 Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie, Andreas Krause

We study the problem of conservative off-policy evaluation (COPE) where given an offline dataset of environment interactions, collected by other agents, we seek to obtain a (tight) lower bound on a policy's performance.

Continuous Control Off-policy evaluation +1

Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement

no code implementations9 May 2023 Yunke Ao, Hooman Esfandiari, Fabio Carrillo, Yarden As, Mazda Farshad, Benjamin F. Grewe, Andreas Krause, Philipp Fuernstahl

Spinal fusion surgery requires highly accurate implantation of pedicle screw implants, which must be conducted in critical proximity to vital structures with a limited view of anatomy.

Anatomy

A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks

no code implementations16 May 2023 Ali Gorji, Andisheh Amrollahi, Andreas Krause

We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.

Provably Learning Nash Policies in Constrained Markov Potential Games

no code implementations13 Jun 2023 Pragnya Alatur, Giorgia Ramponi, Niao He, Andreas Krause

Multi-agent reinforcement learning (MARL) addresses sequential decision-making problems with multiple agents, where each agent optimizes its own objective.

Decision Making Multi-agent Reinforcement Learning +1

Safe Risk-averse Bayesian Optimization for Controller Tuning

no code implementations23 Jun 2023 Christopher Koenig, Miks Ozols, Anastasia Makarova, Efe C. Balta, Andreas Krause, Alisa Rupenyan

Controller tuning and parameter optimization are crucial in system design to improve both the controller and underlying system performance.

Bayesian Optimization

Model-based Causal Bayesian Optimization

no code implementations31 Jul 2023 Scott Sussex, Pier Giuseppe Sessa, Anastasiia Makarova, Andreas Krause

We formalize this generalization of CBO as Adversarial Causal Bayesian Optimization (ACBO) and introduce the first algorithm for ACBO with bounded regret: Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).

Bayesian Optimization counterfactual

Distributionally Robust Model-based Reinforcement Learning with Large State Spaces

no code implementations5 Sep 2023 Shyam Sundhar Ramesh, Pier Giuseppe Sessa, Yifan Hu, Andreas Krause, Ilija Bogunovic

Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.

Gaussian Processes Model-based Reinforcement Learning +1

Data-Efficient Task Generalization via Probabilistic Model-based Meta Reinforcement Learning

no code implementations13 Nov 2023 Arjun Bhardwaj, Jonas Rothfuss, Bhavya Sukhija, Yarden As, Marco Hutter, Stelian Coros, Andreas Krause

We introduce PACOH-RL, a novel model-based Meta-Reinforcement Learning (Meta-RL) algorithm designed to efficiently adapt control policies to changing dynamics.

Meta-Learning Meta Reinforcement Learning +2

Sinkhorn Flow: A Continuous-Time Framework for Understanding and Generalizing the Sinkhorn Algorithm

no code implementations28 Nov 2023 Mohammad Reza Karimi, Ya-Ping Hsieh, Andreas Krause

Many problems in machine learning can be formulated as solving entropy-regularized optimal transport on the space of probability measures.
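For reference, the discrete Sinkhorn algorithm this entry builds on alternates two simple marginal-matching scalings of a Gibbs kernel. A minimal illustrative sketch (the basic algorithm, not the paper's continuous-time construction):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropy-regularized OT between histograms a and b with cost matrix C.
    Alternately rescales rows and columns of the Gibbs kernel K = exp(-C/eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]  # transport plan

a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(a, b, C)
print(P.sum(axis=1))  # [0.5 0.5] -- row marginals match a
```

Since the last update enforces the row constraint exactly, the returned plan matches `a` exactly and `b` up to the convergence tolerance; nearly all mass sits on the cheap diagonal.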

EquiReact: An equivariant neural network for chemical reactions

no code implementations13 Dec 2023 Puck van Gerwen, Ksenia R. Briling, Charlotte Bunne, Vignesh Ram Somnath, Ruben Laplaza, Andreas Krause, Clemence Corminboeuf

Equivariant neural networks have considerably improved the accuracy and data-efficiency of predictions of molecular properties.

Property Prediction

Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian Approach

no code implementations16 Jan 2024 Mahrokh Ghoddousi Boroujeni, Andreas Krause, Giancarlo Ferrari Trecate

Personalized federated learning (PFL) goes one step further by adapting the global model to each client, enhancing the model's fit for different clients.

Personalized Federated Learning

Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL

no code implementations8 Feb 2024 Jiawei Huang, Niao He, Andreas Krause

We study the sample complexity of reinforcement learning (RL) in Mean-Field Games (MFGs) with model-based function approximation that requires strategic exploration to find a Nash Equilibrium policy.

Computational Efficiency Reinforcement Learning (RL)

Information-based Transductive Active Learning

no code implementations13 Feb 2024 Jonas Hübotter, Bhavya Sukhija, Lenart Treven, Yarden As, Andreas Krause

We generalize active learning to address real-world settings where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region.

Active Learning Bayesian Optimization +1

A PAC-Bayesian Framework for Optimal Control with Stability Guarantees

1 code implementation26 Mar 2024 Mahrokh Ghoddousi Boroujeni, Clara Lucía Galimberti, Andreas Krause, Giancarlo Ferrari-Trecate

Based on these bounds, we propose a new method for designing optimal controllers, offering a principled way to incorporate prior knowledge into the synthesis process, which aids in improving the control policy and mitigating overfitting.

Generalization Bounds

Neural Contextual Bandits without Regret

1 code implementation7 Jul 2021 Parnian Kassraie, Andreas Krause

Contextual bandits are a rich model for sequential decision making given side information, with important applications, e.g., in recommender systems.

Decision Making Multi-Armed Bandits +1

Sensing Cox Processes via Posterior Sampling and Positive Bases

1 code implementation21 Oct 2021 Mojmír Mutný, Andreas Krause

We study adaptive sensing of Cox point processes, a widely used model from spatial statistics.

Experimental Design Point Processes

MARS: Meta-Learning as Score Matching in the Function Space

1 code implementation24 Oct 2022 Krunoslav Lehman Pavasovic, Jonas Rothfuss, Andreas Krause

To circumvent these issues, we approach meta-learning through the lens of functional Bayesian neural network inference, which views the prior as a stochastic process and performs inference in the function space.

Meta-Learning

Efficiently Learning Fourier Sparse Set Functions

1 code implementation NeurIPS 2019 Andisheh Amrollahi, Amir Zandieh, Michael Kapralov, Andreas Krause

In this paper we consider the problem of efficiently learning set functions that are defined over a ground set of size $n$ and that are sparse (say $k$-sparse) in the Fourier domain.
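As background for what "sparse in the Fourier domain" means here: a set function over ground set $[n]$ expands in the Walsh-Hadamard basis, and functions depending on few interactions have few nonzero coefficients. A small brute-force sketch for intuition only (the paper's contribution is learning such functions from far fewer than $2^n$ queries):

```python
import numpy as np

def wht(f):
    """Fast Walsh-Hadamard transform of a length-2^n vector, where f[x] is the
    set function's value on the subset encoded by bitmask x. Runs in O(n 2^n)."""
    f = np.array(f, dtype=float)
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f / len(f)  # normalized Fourier coefficients

# A set function over n = 3 elements that depends only on element 0:
n = 3
f = [1.0 if (x & 1) else 0.0 for x in range(2**n)]
coeffs = wht(f)
print(np.count_nonzero(np.abs(coeffs) > 1e-12))  # 2 -- a 2-sparse spectrum
```

Here the only nonzero coefficients are at the empty set (value 0.5) and at {0} (value -0.5), so the function is 2-sparse even though its domain has $2^3$ points.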

SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives

1 code implementation5 Mar 2020 Emmanouil Angelis, Philippe Wenk, Bernhard Schölkopf, Stefan Bauer, Andreas Krause

Gaussian processes are an important regression tool with excellent analytic properties which allow for direct integration of derivative observations.

Gaussian Processes regression

Cost-effective Outbreak Detection in Networks

1 code implementation SIGKDD 2007 Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, Natalie Glance

We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude.

Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning

2 code implementations21 Jul 2022 Ilnura Usmanova, Yarden As, Maryam Kamgarpour, Andreas Krause

We introduce a general approach for seeking a stationary point in high dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial.

reinforcement-learning Reinforcement Learning (RL) +2

Anytime Model Selection in Linear Bandits

1 code implementation NeurIPS 2023 Parnian Kassraie, Nicolas Emmenegger, Andreas Krause, Aldo Pacchiano

This allows us to develop ALEXP, which has an exponentially improved ($\log M$) dependence on $M$ for its regret.

Model Selection

Practical Coreset Constructions for Machine Learning

2 code implementations19 Mar 2017 Olivier Bachem, Mario Lucic, Andreas Krause

We investigate coresets - succinct, small summaries of large data sets - so that solutions found on the summary are provably competitive with solutions found on the full data set.

BIG-bench Machine Learning Clustering +1

Streaming Non-monotone Submodular Maximization: Personalized Video Summarization on the Fly

1 code implementation12 Jun 2017 Baharan Mirzasoleiman, Stefanie Jegelka, Andreas Krause

The need for real-time analysis of rapidly produced data streams (e.g., video and image streams) motivated the design of streaming algorithms that can efficiently extract and summarize useful information from massive data "on the fly".

Data Structures and Algorithms Information Retrieval

Online Variance Reduction with Mixtures

1 code implementation29 Mar 2019 Zalán Borsos, Sebastian Curi, Kfir. Y. Levy, Andreas Krause

Adaptive importance sampling for stochastic optimization is a promising approach that offers improved convergence through variance reduction.

Stochastic Optimization

Learning to Play Sequential Games versus Unknown Opponents

1 code implementation NeurIPS 2020 Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause

We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.

Bilevel Optimization

Unbalanced Diffusion Schrödinger Bridge

1 code implementation15 Jun 2023 Matteo Pariset, Ya-Ping Hsieh, Charlotte Bunne, Andreas Krause, Valentin De Bortoli

Schrödinger bridges (SBs) provide an elegant framework for modeling the temporal evolution of populations in physical, chemical, or biological systems.

Intrinsic Gaussian Vector Fields on Manifolds

1 code implementation28 Oct 2023 Daniel Robert-Nicoud, Andreas Krause, Viacheslav Borovitskiy

Various applications ranging from robotics to climate science require modeling signals on non-Euclidean domains, such as the sphere.

Uncertainty Quantification

A domain agnostic measure for monitoring and evaluating GANs

1 code implementation NeurIPS 2019 Paulina Grnarova, Kfir. Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Ian Goodfellow, Thomas Hofmann, Andreas Krause

Evaluations are essential for: (i) relative assessment of different models and (ii) monitoring the progress of a single model throughout training.

Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases

3 code implementations1 Oct 2020 Chris Wendler, Andisheh Amrollahi, Bastian Seifert, Andreas Krause, Markus Püschel

Many applications of machine learning on discrete domains, such as learning preference functions in recommender systems or auctions, can be reduced to estimating a set function that is sparse in the Fourier domain.

Recommendation Systems

PopSkipJump: Decision-Based Attack for Probabilistic Classifiers

1 code implementation14 Jun 2021 Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, Andreas Krause

Most current classifiers are vulnerable to adversarial examples, small input perturbations that change the classification output.

Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems

1 code implementation NeurIPS 2021 Andreas Schlaginhaufen, Philippe Wenk, Andreas Krause, Florian Dörfler

To this end, neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed.

GoSafeOpt: Scalable Safe Exploration for Global Optimization of Dynamical Systems

1 code implementation24 Jan 2022 Bhavya Sukhija, Matteo Turchetta, David Lindner, Andreas Krause, Sebastian Trimpe, Dominik Baumann

Learning optimal control policies directly on physical systems is challenging since even a single failure can lead to costly hardware damage.

Safe Exploration

Interactively Learning Preference Constraints in Linear Bandits

1 code implementation10 Jun 2022 David Lindner, Sebastian Tschiatschek, Katja Hofmann, Andreas Krause

We provide an instance-dependent lower bound for constrained linear best-arm identification and show that ACOL's sample complexity matches the lower bound in the worst-case.

Decision Making

Adaptive Sequence Submodularity

1 code implementation NeurIPS 2019 Marko Mitrovic, Ehsan Kazemi, Moran Feldman, Andreas Krause, Amin Karbasi

In many machine learning applications, one needs to interactively select a sequence of items (e.g., recommending movies based on a user's feedback) or make sequential decisions in a certain order (e.g., guiding an agent through a series of states).

Decision Making Link Prediction +1

Structured Variational Inference in Unstable Gaussian Process State Space Models

1 code implementation16 Jul 2019 Silvan Melchior, Sebastian Curi, Felix Berkenkamp, Andreas Krause

Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.

Gaussian Processes Variational Inference

Adaptive Sampling for Stochastic Risk-Averse Learning

1 code implementation NeurIPS 2020 Sebastian Curi, Kfir. Y. Levy, Stefanie Jegelka, Andreas Krause

In high-stakes machine learning applications, it is crucial to not only perform well on average, but also when restricted to difficult examples.

Point Processes

Isotropic Gaussian Processes on Finite Spaces of Graphs

3 code implementations3 Nov 2022 Viacheslav Borovitskiy, Mohammad Reza Karimi, Vignesh Ram Somnath, Andreas Krause

We propose a principled way to define Gaussian process priors on various sets of unweighted graphs: directed or undirected, with or without loops.

Gaussian Processes Molecular Property Prediction +1

Implicit Manifold Gaussian Process Regression

1 code implementation NeurIPS 2023 Bernardo Fichera, Viacheslav Borovitskiy, Andreas Krause, Aude Billard

Gaussian process regression is widely used because of its ability to provide well-calibrated uncertainty estimates and handle small or sparse datasets.

regression

Automatic Termination for Hyperparameter Optimization

1 code implementation16 Apr 2021 Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.

Bayesian Optimization Hyperparameter Optimization

ODIN: ODE-Informed Regression for Parameter and State Inference in Time-Continuous Dynamical Systems

2 code implementations17 Feb 2019 Philippe Wenk, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, Stefan Bauer

Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting.

Gaussian Processes Model Selection +1

Submodular Reinforcement Learning

1 code implementation25 Jul 2023 Manish Prajapat, Mojmír Mutný, Melanie N. Zeilinger, Andreas Krause

In many important applications, such as coverage control, experiment design and informative path planning, rewards naturally have diminishing returns, i.e., their value decreases in light of similar states visited previously.

reinforcement-learning Reinforcement Learning (RL)

Active Exploration for Inverse Reinforcement Learning

1 code implementation18 Jul 2022 David Lindner, Andreas Krause, Giorgia Ramponi

We propose a novel IRL algorithm: Active exploration for Inverse Reinforcement Learning (AceIRL), which actively explores an unknown environment and expert policy to quickly learn the expert's reward function and identify a good policy.

reinforcement-learning Reinforcement Learning (RL)

Model-based Causal Bayesian Optimization

1 code implementation18 Nov 2022 Scott Sussex, Anastasiia Makarova, Andreas Krause

How should we intervene on an unknown structural equation model to maximize a downstream variable of interest?

Bayesian Optimization

Near-Optimal Multi-Agent Learning for Safe Coverage Control

1 code implementation12 Oct 2022 Manish Prajapat, Matteo Turchetta, Melanie N. Zeilinger, Andreas Krause

In this paper, we aim to efficiently learn the density to approximately solve the coverage problem while preserving the agents' safety.

Navigate Safe Exploration

Supervised Training of Conditional Monge Maps

1 code implementation28 Jun 2022 Charlotte Bunne, Andreas Krause, Marco Cuturi

To account for that context in OT estimation, we introduce CondOT, a multi-task approach to estimate a family of OT maps conditioned on a context variable, using several pairs of measures $\left(\mu_i, \nu_i\right)$ tagged with a context label $c_i$.

DockGame: Cooperative Games for Multimeric Rigid Protein Docking

1 code implementation9 Oct 2023 Vignesh Ram Somnath, Pier Giuseppe Sessa, Maria Rodriguez Martinez, Andreas Krause

Most traditional and deep learning methods for docking have focused mainly on binary docking, following either a search-based, regression-based, or generative modeling paradigm.

Protein Design

Safe Guaranteed Exploration for Non-linear Systems

1 code implementation9 Feb 2024 Manish Prajapat, Johannes Köhler, Matteo Turchetta, Andreas Krause, Melanie N. Zeilinger

Based on this framework we propose an efficient algorithm, SageMPC, SAfe Guaranteed Exploration using Model Predictive Control.

Efficient Exploration Model Predictive Control

Risk-averse Heteroscedastic Bayesian Optimization

1 code implementation NeurIPS 2021 Anastasiia Makarova, Ilnura Usmanova, Ilija Bogunovic, Andreas Krause

We generalize BO to trade off the mean and input-dependent variance of the objective, both of which we assume to be unknown a priori.

Bayesian Optimization

Online Variance Reduction for Stochastic Optimization

2 code implementations13 Feb 2018 Zalán Borsos, Andreas Krause, Kfir. Y. Levy

Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data.

Stochastic Optimization
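The variance-reduction idea behind this line of work can be seen in a toy form: sampling examples with probability proportional to their magnitude and reweighting keeps the estimator unbiased while shrinking its variance, down to zero when the proposal is exactly proportional. A hedged sketch of that principle, not the bandit algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.exponential(size=1000)   # stand-in per-example quantities
target = losses.mean()                # what uniform sampling estimates

# Sample index i with probability p_i proportional to magnitude, and
# reweight by 1/(n p_i): the estimator stays unbiased, and with p_i
# exactly proportional to the values its variance drops to zero.
p = losses / losses.sum()
idx = rng.choice(losses.size, size=10_000, p=p)
est = np.mean(losses[idx] / (losses.size * p[idx]))
print(abs(est - target) < 1e-9)  # True: zero-variance proposal
```

In practice the per-example quantities are unknown and changing, which is exactly why adaptive schemes like the one above's online counterparts are needed.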

Online Active Model Selection for Pre-trained Classifiers

1 code implementation19 Oct 2020 Mohammad Reza Karimi, Nezihe Merve Gürel, Bojan Karlaš, Johannes Rausch, Ce Zhang, Andreas Krause

Given $k$ pre-trained classifiers and a stream of unlabeled data examples, how can we actively decide when to query a label so that we can distinguish the best model from the rest while making a small number of queries?

Model Selection
