
no code implementations • NeurIPS 2021 • Vignesh Ram Somnath, Charlotte Bunne, Andreas Krause

This paper introduces a multi-scale graph construction of a protein -- HoloProt -- connecting surface to structure and sequence.

no code implementations • 26 Mar 2022 • Johannes Kirschner, Mojmir Mutný, Andreas Krause, Jaime Coello de Portugal, Nicole Hiller, Jochem Snuverink

Tuning machine parameters of particle accelerators is a repetitive and time-consuming task that is challenging to automate.

no code implementations • 14 Mar 2022 • Pier Giuseppe Sessa, Maryam Kamgarpour, Andreas Krause

We consider model-based multi-agent reinforcement learning, where the environment transition model is unknown and can only be learned via expensive interactions with the environment.

no code implementations • 11 Feb 2022 • Charlotte Bunne, Ya-Ping Hsieh, Marco Cuturi, Andreas Krause

Our goal is to rely on Gaussian approximations of the data to provide the reference stochastic process needed to estimate SB.

no code implementations • 3 Feb 2022 • Ilija Bogunovic, Zihan Li, Andreas Krause, Jonathan Scarlett

We consider the sequential optimization of an unknown, continuous, and expensive-to-evaluate reward function from noisy and adversarially corrupted observed rewards.

no code implementations • 1 Feb 2022 • Parnian Kassraie, Jonas Rothfuss, Andreas Krause

We demonstrate our approach on the kernelized bandit problem (a.k.a. Bayesian optimization), where we establish regret bounds competitive with those given the true kernel.

1 code implementation • ICLR 2022 • Yarden As, Ilnura Usmanova, Sebastian Curi, Andreas Krause

Improving sample efficiency and safety is a crucial challenge when deploying reinforcement learning in high-stakes real-world applications.

no code implementations • 24 Jan 2022 • Bhavya Sukhija, Matteo Turchetta, David Lindner, Andreas Krause, Sebastian Trimpe, Dominik Baumann

Learning optimal control policies directly on physical systems is challenging since even a single failure can lead to costly hardware damage.

1 code implementation • ICLR 2022 • Octavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi Jaakkola, Andreas Krause

Protein complex formation is a central problem in biology, involved in most of the cell's processes and essential for applications such as drug design and protein engineering.

no code implementations • NeurIPS 2021 • Ilija Bogunovic, Andreas Krause

Instead, we introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
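
Concretely, the misspecification condition described above can be written as follows (a sketch of the setting; the norm bound $B$ is notation assumed here, not taken from the snippet):

$$\exists\, \tilde{f} \in \mathcal{H}_k \ \text{with} \ \|\tilde{f}\|_{\mathcal{H}_k} \le B \quad \text{such that} \quad \sup_{x \in \mathcal{X}} \big|f(x) - \tilde{f}(x)\big| \le \epsilon.$$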

1 code implementation • NeurIPS 2021 • Anastasiia Makarova, Ilnura Usmanova, Ilija Bogunovic, Andreas Krause

We generalize BO to trade mean and input-dependent variance of the objective, both of which we assume to be unknown a priori.

no code implementations • NeurIPS 2021 • Andreas Schlaginhaufen, Philippe Wenk, Andreas Krause, Florian Dörfler

To this end, neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed.

no code implementations • 22 Oct 2021 • Elvis Nava, Mojmír Mutný, Andreas Krause

In Bayesian Optimization (BO) we study black-box function optimization with noisy point evaluations and Bayesian priors.

1 code implementation • 21 Oct 2021 • Mojmír Mutný, Andreas Krause

We study adaptive sensing of Cox point processes, a widely used model from spatial statistics.

1 code implementation • NeurIPS 2021 • Jonas Gehring, Gabriel Synnaeve, Andreas Krause, Nicolas Usunier

We alleviate the need for prior knowledge by proposing a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.

no code implementations • 29 Sep 2021 • Mathieu Chevalley, Charlotte Bunne, Andreas Krause, Stefan Bauer

Learning representations that capture the underlying data-generating process is a key problem for data-efficient and robust use of neural networks.

no code implementations • 26 Sep 2021 • Zalán Borsos, Mojmír Mutný, Marco Tagliasacchi, Andreas Krause

We show the effectiveness of our framework for a wide range of models in various settings, including training non-convex models online and batch active learning.

no code implementations • NeurIPS 2020 • Pier Giuseppe Sessa, Ilija Bogunovic, Andreas Krause, Maryam Kamgarpour

We formulate the novel class of contextual games, a class of repeated games driven by contextual information at each round.

no code implementations • 8 Jul 2021 • Barna Pasztor, Ilija Bogunovic, Andreas Krause

We tackle systems with a huge population of interacting agents (e.g., swarms) via Mean-Field Control (MFC).

1 code implementation • 7 Jul 2021 • Parnian Kassraie, Andreas Krause

Contextual bandits are a rich model for sequential decision making given side information, with important applications, e.g., in recommender systems.

1 code implementation • NeurIPS 2021 • Lenart Treven, Philippe Wenk, Florian Dörfler, Andreas Krause

Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification.

1 code implementation • 14 Jun 2021 • Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, Andreas Krause

Most current classifiers are vulnerable to adversarial examples, small input perturbations that change the classification output.

1 code implementation • 11 Jun 2021 • Charlotte Bunne, Laetitia Meng-Papaxanthos, Andreas Krause, Marco Cuturi

We propose to model these trajectories as collective realizations of a causal Jordan-Kinderlehrer-Otto (JKO) flow of measures: The JKO scheme posits that the new configuration taken by a population at time $t+1$ is one that trades off an improvement, in the sense that it decreases an energy, while remaining close (in Wasserstein distance) to the previous configuration observed at $t$.
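
One step of the JKO scheme sketched above can be written as the following variational problem (with $E$ the energy, $W_2$ the Wasserstein distance, and $\tau$ a step-size parameter; the exact weighting is the usual convention, assumed here):

$$\mu_{t+1} \in \operatorname*{arg\,min}_{\mu}\; E(\mu) + \frac{1}{2\tau}\, W_2^2(\mu, \mu_t).$$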

2 code implementations • NeurIPS 2021 • Tobias Sutter, Andreas Krause, Daniel Kuhn

Training models that perform well under distribution shifts is a central challenge in machine learning.

no code implementations • NeurIPS 2021 • Jonas Rothfuss, Dominique Heyn, Jinfan Chen, Andreas Krause

When data are scarce, meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks.

no code implementations • ICLR 2022 • Yatao Bian, Yu Rong, Tingyang Xu, Jiaxiang Wu, Andreas Krause, Junzhou Huang

By running fixed point iteration for multiple steps, we achieve a trajectory of the valuations, among which we define the valuation with the best conceivable decoupling error as the Variational Index.

no code implementations • arXiv 2021 • Vignesh Ram Somnath, Charlotte Bunne, Connor W. Coley, Andreas Krause, Regina Barzilay

Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule.

Ranked #4 on Single-step retrosynthesis on USPTO-50k

1 code implementation • 2 Jun 2021 • David Lindner, Hoda Heidari, Andreas Krause

To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm.

1 code implementation • ICCV 2021 • Mikhail Usvyatsov, Anastasia Makarova, Rafael Ballester-Ripoll, Maxim Rakhuba, Andreas Krause, Konrad Schindler

We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at a fraction of their entries only.

1 code implementation • NeurIPS 2021 • Scott Sussex, Andreas Krause, Caroline Uhler

Causal structure learning is a key problem in many domains.

no code implementations • 25 May 2021 • Johannes Kirschner, Andreas Krause

We consider Bayesian optimization in settings where observations can be adversarially biased, for example by an uncontrolled hidden confounder.

1 code implementation • NeurIPS 2021 • Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause

In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation.

no code implementations • 21 May 2021 • Andreas Krause

I demonstrate that with the market return determined by the equilibrium returns of the CAPM, expected returns of an asset are affected by the risks of all assets jointly.

1 code implementation • NeurIPS 2021 • Manuel Wüthrich, Bernhard Schölkopf, Andreas Krause

These regret bounds illuminate the relationship between the number of evaluations, the domain size (i.e., cardinality of finite domains / Lipschitz constant of the covariance function in continuous domains), and the optimality of the retrieved function value.

no code implementations • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

Bayesian optimization (BO) is a widely used approach for the hyperparameter optimization (HPO) of machine learning algorithms.

no code implementations • 18 Mar 2021 • Sebastian Curi, Ilija Bogunovic, Andreas Krause

In real-world tasks, reinforcement learning (RL) agents frequently encounter situations that are not present during training time.

1 code implementation • NeurIPS 2021 • David Lindner, Matteo Turchetta, Sebastian Tschiatschek, Kamil Ciosek, Andreas Krause

For many reinforcement learning (RL) applications, specifying a reward is difficult.

1 code implementation • ICLR 2021 • Núria Armengol Urpí, Sebastian Curi, Andreas Krause

We demonstrate empirically that in the presence of natural distribution-shifts, O-RAAC learns policies with good average performance.

no code implementations • 21 Jan 2021 • Marc Jourdan, Mojmír Mutný, Johannes Kirschner, Andreas Krause

Combinatorial bandits with semi-bandit feedback generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.

no code implementations • 19 Jan 2021 • Christopher König, Matteo Turchetta, John Lygeros, Alisa Rupenyan, Andreas Krause

Thus, our approach builds on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization.

no code implementations • 1 Jan 2021 • Jonas Rothfuss, Martin Josifoski, Andreas Krause

Bayesian deep learning is a promising approach towards improved uncertainty quantification and sample efficiency.

no code implementations • 21 Oct 2020 • Joan Bas-Serrano, Sebastian Curi, Andreas Krause, Gergely Neu

We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.

1 code implementation • 19 Oct 2020 • Zalán Borsos, Marco Tagliasacchi, Andreas Krause

Active learning is an effective technique for reducing the labeling cost by improving data efficiency.

1 code implementation • 19 Oct 2020 • Mohammad Reza Karimi, Nezihe Merve Gürel, Bojan Karlaš, Johannes Rausch, Ce Zhang, Andreas Krause

Given $k$ pre-trained classifiers and a stream of unlabeled data examples, how can we actively decide when to query a label so that we can distinguish the best model from the rest while making a small number of queries?
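
A naive baseline for this setup, querying only when the pre-trained models disagree, might look like the sketch below (an illustration of the problem interface only, not the paper's algorithm; the stream/oracle interface and helper names are assumptions):

```python
import numpy as np

def select_best_model(classifiers, stream, budget):
    """Naive disagreement-based baseline: query the (costly) label oracle
    only when the k pre-trained classifiers disagree, and track how often
    each model is correct on the queried examples."""
    correct = np.zeros(len(classifiers))
    for x, query_label in stream:               # query_label() is the costly oracle
        preds = [clf(x) for clf in classifiers]
        if len(set(preds)) > 1 and budget > 0:  # disagreement => query is informative
            y = query_label()
            budget -= 1
            correct += np.array([p == y for p in preds], dtype=float)
    return int(np.argmax(correct))
```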

5 code implementations • ICLR 2021 • Max B. Paulus, Chris J. Maddison, Andreas Krause

Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance.

2 code implementations • 1 Oct 2020 • Chris Wendler, Andisheh Amrollahi, Bastian Seifert, Andreas Krause, Markus Püschel

Many applications of machine learning on discrete domains, such as learning preference functions in recommender systems or auctions, can be reduced to estimating a set function that is sparse in the Fourier domain.

no code implementations • NeurIPS 2020 • Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause

We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.

no code implementations • 7 Jul 2020 • Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett

We consider a stochastic linear bandit problem in which the rewards are not only subject to random noise, but also adversarial attacks subject to a suitable budget $C$ (i.e., an upper bound on the sum of corruption magnitudes across the time horizon).
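
In symbols, the observation model described above can be sketched as follows (notation assumed here: $\theta^\star$ the unknown parameter, $\eta_t$ random noise, $c_t$ the adversarial corruption at round $t$):

$$y_t = \langle \theta^\star, x_t \rangle + \eta_t + c_t, \qquad \sum_{t=1}^{T} |c_t| \le C.$$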

no code implementations • 24 Jun 2020 • Yatao Bian, Joachim M. Buhmann, Andreas Krause

We start with a thorough characterization of the class of continuous submodular functions, and show that continuous submodularity is equivalent to a weak version of the diminishing returns (DR) property.

1 code implementation • NeurIPS 2020 • Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, Alekh Agarwal

In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.

no code implementations • 19 Jun 2020 • Lenart Treven, Sebastian Curi, Mojmir Mutny, Andreas Krause

The principal task in controlling dynamical systems is to ensure their stability.

1 code implementation • NeurIPS 2020 • Max B. Paulus, Dami Choi, Daniel Tarlow, Andreas Krause, Chris J. Maddison

The Gumbel-Max trick is the basis of many relaxed gradient estimators.
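
For reference, a minimal sketch of the Gumbel-Max trick and its common softmax relaxation (standard constructions, not code from the paper; `rng` is assumed to be a `numpy.random.Generator`):

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    """Gumbel-Max trick: argmax over logits plus i.i.d. Gumbel noise is an
    exact sample from the categorical distribution softmax(logits)."""
    return int(np.argmax(logits + rng.gumbel(size=logits.shape)))

def gumbel_softmax(logits, rng, tau=1.0):
    """Common relaxation: replacing the hard argmax with a temperature-tau
    softmax yields a differentiable surrogate sample."""
    z = (logits + rng.gumbel(size=logits.shape)) / tau
    e = np.exp(z - z.max())
    return e / e.sum()
```

As $\tau \to 0$, the relaxed sample approaches a one-hot vector, recovering the hard Gumbel-Max sample.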

1 code implementation • NeurIPS 2020 • Sebastian Curi, Felix Berkenkamp, Andreas Krause

Based on this theoretical foundation, we show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models.

no code implementations • NeurIPS 2021 • Vignesh Ram Somnath, Charlotte Bunne, Connor W. Coley, Andreas Krause, Regina Barzilay

Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule.

no code implementations • L4DC 2020 • Sebastian Curi, Silvan Melchior, Felix Berkenkamp, Andreas Krause

Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.

no code implementations • L4DC 2020 • Ilnura Usmanova, Andreas Krause, Maryam Kamgarpour

For safety-critical black-box optimization tasks, observations of the constraints and the objective are often noisy and available only for the feasible points.

1 code implementation • NeurIPS 2020 • Zalán Borsos, Mojmír Mutný, Andreas Krause

Coresets are small data summaries that are sufficient for model training.

no code implementations • ICML 2020 • Aytunc Sahin, Yatao Bian, Joachim M. Buhmann, Andreas Krause

Submodular functions have been studied extensively in machine learning and data mining.

1 code implementation • 2 Apr 2020 • Ankit Dhall, Anastasia Makarova, Octavian Ganea, Dario Pavllo, Michael Greeff, Andreas Krause

Image classification has been studied extensively, but there has been limited work in using unconventional, external guidance other than traditional image-label pairs for training.

1 code implementation • 5 Mar 2020 • Emmanouil Angelis, Philippe Wenk, Bernhard Schölkopf, Stefan Bauer, Andreas Krause

Gaussian processes are an important regression tool with excellent analytic properties which allow for direct integration of derivative observations.

no code implementations • 4 Mar 2020 • Ilija Bogunovic, Andreas Krause, Jonathan Scarlett

We consider the problem of optimizing an unknown (typically non-convex) function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), based on noisy bandit feedback.

no code implementations • 28 Feb 2020 • Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause

We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.

no code implementations • 25 Feb 2020 • Johannes Kirschner, Tor Lattimore, Andreas Krause

Partial monitoring is a rich framework for sequential decision making under uncertainty that generalizes many well known bandit models, including linear, combinatorial and dueling bandits.

no code implementations • 20 Feb 2020 • Johannes Kirschner, Ilija Bogunovic, Stefanie Jegelka, Andreas Krause

Attaining such robustness is the goal of distributionally robust optimization, which seeks a solution to an optimization problem that is worst-case robust under a specified distributional shift of an uncontrolled covariate.

2 code implementations • ICML Workshop LifelongML 2020 • Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, Andreas Krause

Meta-learning can successfully acquire useful inductive biases from data.

1 code implementation • NeurIPS 2019 • Andisheh Amrollahi, Amir Zandieh, Michael Kapralov, Andreas Krause

In this paper we consider the problem of efficiently learning set functions that are defined over a ground set of size $n$ and that are sparse (say $k$-sparse) in the Fourier domain.

no code implementations • 8 Nov 2019 • Mohammad Yaghini, Andreas Krause, Hoda Heidari

Our family of fairness notions corresponds to a new interpretation of economic models of Equality of Opportunity (EOP), and it includes most existing notions of fairness as special cases.

no code implementations • NeurIPS 2019 • Matteo Turchetta, Felix Berkenkamp, Andreas Krause

Existing algorithms for this problem learn about the safety of all decisions to ensure convergence.

no code implementations • 29 Oct 2019 • Matteo Turchetta, Andreas Krause, Sebastian Trimpe

In reinforcement learning (RL), an autonomous agent learns to perform complex tasks by maximizing an exogenous reward signal while interacting with its environment.

1 code implementation • NeurIPS 2020 • Sebastian Curi, Kfir. Y. Levy, Stefanie Jegelka, Andreas Krause

In high-stakes machine learning applications, it is crucial to not only perform well on average, but also when restricted to difficult examples.

no code implementations • 25 Oct 2019 • Mojmír Mutný, Michał Dereziński, Andreas Krause

We analyze the convergence rate of the randomized Newton-like method introduced by Qu et al.

1 code implementation • NeurIPS 2019 • Pier Giuseppe Sessa, Ilija Bogunovic, Maryam Kamgarpour, Andreas Krause

We consider the problem of learning to play a repeated multi-agent game with an unknown reward function.

1 code implementation • 21 Jul 2019 • Jonas Rothfuss, Fabio Ferreira, Simon Boehm, Simon Walther, Maxim Ulrich, Tamim Asfour, Andreas Krause

To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training.

1 code implementation • 16 Jul 2019 • Silvan Melchior, Sebastian Curi, Felix Berkenkamp, Andreas Krause

Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.

no code implementations • 2 Jul 2019 • Erik Daxberger, Anastasia Makarova, Matteo Turchetta, Andreas Krause

However, few methods exist for mixed-variable domains and none of them can handle discrete constraints that arise in many real-world applications.

no code implementations • 28 Jun 2019 • Marcello Fiducioso, Sebastian Curi, Benedikt Schumacher, Markus Gwerder, Andreas Krause

Furthermore, this successful attempt paves the way for further use at different levels of HVAC systems, with promising savings in energy, operational, and commissioning costs; it is a practical demonstration of the positive effects that Artificial Intelligence can have on environmental sustainability.

1 code implementation • 27 Jun 2019 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause

We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints.

1 code implementation • NeurIPS 2019 • Johannes Kirschner, Andreas Krause

We introduce a stochastic contextual bandit model where at each time step the environment chooses a distribution over a context set and samples the context from this distribution.

no code implementations • 14 May 2019 • Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka

Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety.

no code implementations • ICLR 2019 • Paulina Grnarova, Kfir. Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Thomas Hofmann, Andreas Krause

Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem.

1 code implementation • 29 Mar 2019 • Zalán Borsos, Sebastian Curi, Kfir. Y. Levy, Andreas Krause

Adaptive importance sampling for stochastic optimization is a promising approach that offers improved convergence through variance reduction.

1 code implementation • 22 Feb 2019 • Gabriele Abbati, Philippe Wenk, Michael A. Osborne, Andreas Krause, Bernhard Schölkopf, Stefan Bauer

Stochastic differential equations are an important modeling class in many disciplines.

no code implementations • 21 Feb 2019 • Pragnya Alatur, Kfir. Y. Levy, Andreas Krause

We consider a setting where multiple players sequentially choose among a common set of actions (arms).

2 code implementations • 17 Feb 2019 • Philippe Wenk, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, Stefan Bauer

Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting.

1 code implementation • NeurIPS 2019 • Marko Mitrovic, Ehsan Kazemi, Moran Feldman, Andreas Krause, Amin Karbasi

In many machine learning applications, one needs to interactively select a sequence of items (e.g., recommending movies based on a user's feedback) or make sequential decisions in a certain order (e.g., guiding an agent through a series of states).

2 code implementations • 8 Feb 2019 • Johannes Kirschner, Mojmír Mutný, Nicole Hiller, Rasmus Ischebeck, Andreas Krause

In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently.

no code implementations • 10 Jan 2019 • Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters.

1 code implementation • ICLR 2019 • Nikolay Nikolov, Johannes Kirschner, Felix Berkenkamp, Andreas Krause

Efficient exploration remains a major challenge for reinforcement learning.

no code implementations • NeurIPS 2018 • Josip Djolonga, Stefanie Jegelka, Andreas Krause

Submodular maximization problems appear in several areas of machine learning and data science, as many useful modelling concepts such as diversity and coverage satisfy this natural diminishing returns property.

no code implementations • NeurIPS 2018 • Mojmir Mutny, Andreas Krause

We develop an efficient and provably no-regret Bayesian optimization (BO) algorithm for optimization of black-box functions in high dimensions.

no code implementations • 13 Nov 2018 • Robin Spiess, Felix Berkenkamp, Jan Poland, Andreas Krause

In this paper, we present a deep learning approach that uses images of the sky to compensate power fluctuations predictively and reduces battery stress.

1 code implementation • NeurIPS 2019 • Paulina Grnarova, Kfir. Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Ian Goodfellow, Thomas Hofmann, Andreas Krause

Evaluations are essential for: (i) relative assessment of different models and (ii) monitoring the progress of a single model throughout training.

no code implementations • 10 Sep 2018 • Hoda Heidari, Michele Loi, Krishna P. Gummadi, Andreas Krause

In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness.

1 code implementation • 2 Aug 2018 • Spencer M. Richards, Felix Berkenkamp, Andreas Krause

We demonstrate our method by learning the safe region of attraction for a simulated inverted pendulum.

no code implementations • 4 Jul 2018 • Alkis Gotovos, Hamed Hassani, Andreas Krause, Stefanie Jegelka

We consider the problem of inference in discrete probabilistic models, that is, distributions over subsets of a finite ground set.

no code implementations • 19 Jun 2018 • Sebastian Curi, Kfir. Y. Levy, Andreas Krause

To this end, we introduce a novel estimation algorithm that explicitly trades off bias and variance to optimally reduce the overall estimation error.

no code implementations • NeurIPS 2018 • Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, Andreas Krause

We draw attention to an important, yet largely overlooked aspect of evaluating fairness for automated decision-making systems: namely, risk and welfare considerations.

no code implementations • NeurIPS 2019 • Anette Hunziker, Yuxin Chen, Oisin Mac Aodha, Manuel Gomez Rodriguez, Andreas Krause, Pietro Perona, Yisong Yue, Adish Singla

Our framework is both generic, allowing the design of teaching schedules for different memory models, and also interactive, allowing the teacher to adapt the schedule to the underlying forgetting mechanisms of the learner.

no code implementations • 19 May 2018 • An Bian, Joachim M. Buhmann, Andreas Krause

Mean field inference in probabilistic models is generally a highly nonconvex problem.

3 code implementations • 12 Apr 2018 • Philippe Wenk, Alkis Gotovos, Stefan Bauer, Nico Gorbach, Andreas Krause, Joachim M. Buhmann

Parameter identification and comparison of dynamical systems is a challenging task in many fields.

1 code implementation • 22 Mar 2018 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause

However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications.

no code implementations • 5 Mar 2018 • Sebastian Tschiatschek, Aytunc Sahin, Andreas Krause

We consider learning of submodular functions from data.

2 code implementations • 13 Feb 2018 • Zalán Borsos, Andreas Krause, Kfir. Y. Levy

Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data.

no code implementations • 29 Jan 2018 • Johannes Kirschner, Andreas Krause

In the stochastic bandit problem, the goal is to maximize an unknown function via a sequence of noisy evaluations.

no code implementations • NeurIPS 2017 • Josip Djolonga, Andreas Krause

In this paper we focus on the problem of submodular minimization, for which we show that such layers are indeed possible.

no code implementations • NeurIPS 2017 • Lin Chen, Andreas Krause, Amin Karbasi

We then receive noisy feedback about the utility of the action (e.g., ratings), which we model as a submodular function over the context-action space.

no code implementations • 24 Nov 2017 • Sebastian Tschiatschek, Adish Singla, Manuel Gomez Rodriguez, Arpit Merchant, Andreas Krause

The main objective of our work is to minimize the spread of misinformation by stopping the propagation of fake news in the network.

Social and Information Networks

no code implementations • 17 Nov 2017 • Christoph Hirnschall, Adish Singla, Sebastian Tschiatschek, Andreas Krause

We provide formal guarantees on the performance of our algorithm and test the viability of our approach in a user study with data of apartments on Airbnb.

no code implementations • NeurIPS 2017 • Mohammad Reza Karimi, Mario Lucic, Hamed Hassani, Andreas Krause

By exploiting that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions.
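
The canonical example of an extension that acts linearly on $f$ is the multilinear extension, with projected ascent over a polytope $\mathcal{P}$ followed by rounding of the fractional solution, as sketched below (standard definitions, assumed here rather than quoted from the paper):

$$F(\mathbf{x}) = \sum_{S \subseteq V} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j), \qquad \mathbf{x}_{k+1} = \Pi_{\mathcal{P}}\big(\mathbf{x}_k + \eta_k \nabla F(\mathbf{x}_k)\big).$$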

1 code implementation • NeurIPS 2017 • An Bian, Kfir. Y. Levy, Andreas Krause, Joachim M. Buhmann

Concretely, we first devise a "two-phase" algorithm with $1/4$ approximation guarantee.

no code implementations • 4 Sep 2017 • Josip Djolonga, Andreas Krause

Recently, there has been growing interest in the problem of learning rich implicit models - those from which we can sample, but whose density we cannot evaluate.

no code implementations • ICML 2017 • Baharan Mirzasoleiman, Amin Karbasi, Andreas Krause

How can we summarize a dynamic data stream when elements selected for the summary can be deleted at any time?

no code implementations • ICML 2017 • Marko Mitrovic, Mark Bun, Andreas Krause, Amin Karbasi

Many data summarization applications are captured by the general framework of submodular maximization.

no code implementations • ICML 2017 • Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi

As a remedy, we introduce the problem of sublinear time probabilistic submodular maximization: Given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution will provide almost as much value when restricted to the reduced ground set as when using the full set.

no code implementations • ICML 2017 • Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are unbounded.

no code implementations • ICML 2017 • Olivier Bachem, Mario Lucic, Andreas Krause

The k-Means++ algorithm is the state-of-the-art algorithm for solving k-Means clustering problems, as the computed clusterings are O(log k)-competitive in expectation.
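
For context, the $D^2$ seeding step behind the O(log k) guarantee can be sketched as follows (a textbook version, not the paper's variant; `rng` is assumed to be a `numpy.random.Generator`):

```python
import numpy as np

def kmeans_pp_seeding(X, k, rng):
    """k-Means++ (D^2) seeding: draw each new center with probability
    proportional to the squared distance to the nearest center so far."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```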

1 code implementation • 12 Jun 2017 • Baharan Mirzasoleiman, Stefanie Jegelka, Andreas Krause

The need for real-time analysis of rapidly produced data streams (e.g., video and image streams) has motivated the design of streaming algorithms that can efficiently extract and summarize useful information from massive data "on the fly".

Data Structures and Algorithms • Information Retrieval

no code implementations • ICLR 2018 • Paulina Grnarova, Kfir. Y. Levy, Aurelien Lucchi, Thomas Hofmann, Andreas Krause

We consider the problem of training generative models with a Generative Adversarial Network (GAN).

1 code implementation • NeurIPS 2017 • Felix Berkenkamp, Matteo Turchetta, Angela P. Schoellig, Andreas Krause

Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data.

no code implementations • 23 Mar 2017 • Mario Lucic, Matthew Faulkner, Andreas Krause, Dan Feldman

In this work we show how to construct coresets for mixtures of Gaussians.

2 code implementations • 19 Mar 2017 • Olivier Bachem, Mario Lucic, Andreas Krause

We investigate coresets - succinct, small summaries of large data sets - so that solutions found on the summary are provably competitive with solutions found on the full data set.

no code implementations • 16 Mar 2017 • Yuxin Chen, Jean-Michel Renders, Morteza Haghir Chehreghani, Andreas Krause

We consider the optimal value of information (VoI) problem, where the goal is to sequentially select a set of tests with a minimal cost, so that one can efficiently make the best decision based on the observed outcomes.

1 code implementation • ICML 2017 • Andrew An Bian, Joachim M. Buhmann, Andreas Krause, Sebastian Tschiatschek

Our guarantees are characterized by a combination of the (generalized) curvature $\alpha$ and the submodularity ratio $\gamma$.
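
For a monotone objective maximized greedily under a cardinality constraint, the guarantee combining these two quantities takes the form below (stated from memory as a sketch; $S^{\mathrm{greedy}}$ is the greedy solution and $S^{\star}$ the optimum):

$$f(S^{\mathrm{greedy}}) \;\ge\; \frac{1}{\alpha}\Big(1 - e^{-\alpha\gamma}\Big)\, f(S^{\star}).$$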

no code implementations • 3 Mar 2017 • Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P. Schoellig, Andreas Krause, Stefan Schaal, Sebastian Trimpe

In practice, the parameters of control policies are often tuned manually.

1 code implementation • 27 Feb 2017 • Olivier Bachem, Mario Lucic, Andreas Krause

As such, they have been successfully used to scale up clustering models to massive data sets.

no code implementations • 27 Feb 2017 • Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are unbounded.

no code implementations • 16 Feb 2017 • Adish Singla, Hamed Hassani, Andreas Krause

In our setting, the feedback at any time $t$ is limited in the sense that it is only available to the expert $i^t$ that has been selected by the central algorithm (forecaster), i.e., only the expert $i^t$ receives feedback from the environment and gets to learn at time $t$.

no code implementations • 9 Feb 2017 • Christoph Hirnschall, Adish Singla, Sebastian Tschiatschek, Andreas Krause

We study an online multi-task learning setting, in which instances of related tasks arrive sequentially, and are handled by task-specific online learners.

no code implementations • NeurIPS 2016 • Olivier Bachem, Mario Lucic, Hamed Hassani, Andreas Krause

Seeding - the task of finding initial cluster centers - is critical in obtaining high-quality clusterings for k-Means.

no code implementations • NeurIPS 2016 • Josip Djolonga, Stefanie Jegelka, Sebastian Tschiatschek, Andreas Krause

We study a rich family of distributions that capture variable interactions significantly more expressive than those representable with low-treewidth or pairwise graphical models, or log-supermodular models.

no code implementations • NeurIPS 2016 • Josip Djolonga, Sebastian Tschiatschek, Andreas Krause

We consider the problem of variational inference in probabilistic models with both log-submodular and log-supermodular higher-order potentials.

no code implementations • NeurIPS 2016 • Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, Volkan Cevher

We present a new algorithm, truncated variance reduction (TruVaR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion.

no code implementations • 17 Jun 2016 • Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas Krause

Submodular continuous functions are a category of (generally) non-convex/non-concave functions with a wide spectrum of applications.

1 code implementation • NeurIPS 2016 • Matteo Turchetta, Felix Berkenkamp, Andreas Krause

We define safety in terms of an a priori unknown safety constraint that depends on states and actions.

no code implementations • 31 May 2016 • Mario Lucic, Olivier Bachem, Morteza Zadimoghaddam, Andreas Krause

A variety of large-scale machine learning problems can be cast as instances of constrained submodular maximization.

no code implementations • 24 May 2016 • Yuxin Chen, S. Hamed Hassani, Andreas Krause

We consider the Bayesian active learning and experimental design problem, where the goal is to learn the value of some unknown target variable through a sequence of informative, noisy tests.

no code implementations • 23 May 2016 • Adish Singla, Sebastian Tschiatschek, Andreas Krause

We propose an active learning algorithm that substantially reduces this sample complexity by exploiting the structural constraints on the version space of hemimetrics.

no code implementations • 2 May 2016 • Mario Lucic, Olivier Bachem, Andreas Krause

Outliers are ubiquitous in modern data sets.

no code implementations • 2 May 2016 • Mario Lucic, Mesrob I. Ohannessian, Amin Karbasi, Andreas Krause

Using k-means clustering as a prototypical unsupervised learning problem, we show how we can strategically summarize the data (control space) in order to trade off risk and time when data is generated by a probabilistic model.

no code implementations • 2 May 2016 • Hemant Tyagi, Anastasios Kyrillidis, Bernd Gärtner, Andreas Krause

A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a Sparse Additive Model (SPAM), if it is of the form $f(\mathbf{x}) = \sum_{l \in \mathcal{S}}\phi_{l}(x_l)$ where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll d$.

no code implementations • 18 Apr 2016 • Hemant Tyagi, Anastasios Kyrillidis, Bernd Gärtner, Andreas Krause

For some $\mathcal{S}_1 \subset [d], \mathcal{S}_2 \subset {[d] \choose 2}$, the function $f$ is assumed to be of the form: $$f(\mathbf{x}) = \sum_{p \in \mathcal{S}_1}\phi_{p} (x_p) + \sum_{(l, l^{\prime}) \in \mathcal{S}_2}\phi_{(l, l^{\prime})} (x_{l}, x_{l^{\prime}}).$$ Assuming $\phi_{p},\phi_{(l, l^{\prime})}$, $\mathcal{S}_1$ and, $\mathcal{S}_2$ to be unknown, we provide a randomized algorithm that queries $f$ and exactly recovers $\mathcal{S}_1,\mathcal{S}_2$.

2 code implementations • 15 Mar 2016 • Felix Berkenkamp, Riccardo Moriconi, Angela P. Schoellig, Andreas Krause

The ROA is typically estimated based on a model of the system.

Systems and Control

3 code implementations • 14 Feb 2016 • Felix Berkenkamp, Andreas Krause, Angela P. Schoellig

While an initial guess for the parameters may be obtained from dynamic models of the robot, parameters are usually tuned manually on the real system to achieve the best performance.

no code implementations • 2 Feb 2016 • Eric Schulz, Quentin J. M. Huys, Dominik R. Bach, Maarten Speekenbrink, Andreas Krause

Exploration-exploitation of functions, that is, learning and optimizing a mapping between inputs and expected outputs, is ubiquitous in many real-world situations.

no code implementations • NeurIPS 2015 • Alkis Gotovos, Hamed Hassani, Andreas Krause

Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively.

no code implementations • ICCV 2015 • Jian Zhang, Josip Djolonga, Andreas Krause

Higher-order models have been shown to be very useful for a plethora of computer vision tasks.

no code implementations • NeurIPS 2015 • Baharan Mirzasoleiman, Amin Karbasi, Ashwinkumar Badanidiyuru, Andreas Krause

In this paper, we formalize this challenge as a submodular cover problem.

no code implementations • 23 Nov 2015 • Adish Singla, Sebastian Tschiatschek, Andreas Krause

When the underlying submodular function is unknown, users' feedback can provide noisy evaluations of the function that we seek to maximize.

3 code implementations • 3 Sep 2015 • Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters.

Robotics

no code implementations • 21 Aug 2015 • Mario Lucic, Olivier Bachem, Andreas Krause

We propose a single, practical algorithm to construct strong coresets for a large class of hard and soft clustering problems based on Bregman divergences.

no code implementations • 12 Aug 2015 • Adish Singla, Eric Horvitz, Pushmeet Kohli, Andreas Krause

Furthermore, we consider an embedding of the tasks and workers in an underlying graph that may arise from task similarities or social ties, and that can provide additional side-observations for faster learning.

no code implementations • 8 Aug 2015 • Besmira Nushi, Adish Singla, Anja Gruenheid, Erfan Zamanian, Andreas Krause, Donald Kossmann

Based on this intuitive idea, we introduce the Access Path Model (APM), a novel crowd model that leverages the notion of access paths as an alternative way of retrieving information.

no code implementations • 2 Jun 2015 • Hastagiri P. Vanchinathan, Andreas Marfurt, Charles-Antoine Robelin, Donald Kossmann, Andreas Krause

Given a budget on the cumulative cost of the selected items, how can we pick a subset of maximal value?

no code implementations • 27 Apr 2015 • Yuyin Sun, Adish Singla, Dieter Fox, Andreas Krause

Hierarchies of concepts are useful in many applications from navigation to organization of objects.

no code implementations • 24 Apr 2015 • Adish Singla, Eric Horvitz, Pushmeet Kohli, Ryen White, Andreas Krause

How should we gather information in a network, where each node's visibility is limited to its local neighborhood?

no code implementations • 23 Feb 2015 • Josip Djolonga, Andreas Krause

We consider the problem of approximate Bayesian inference in log-supermodular models.

no code implementations • NeurIPS 2014 • Hemant Tyagi, Bernd Gärtner, Andreas Krause

We consider the problem of learning sparse additive models, i.e., functions of the form: $f(\mathbf{x}) = \sum_{l \in S} \phi_{l}(x_l)$, $\mathbf{x} \in \mathbb{R}^d$, from point queries of $f$.

no code implementations • NeurIPS 2014 • Josip Djolonga, Andreas Krause

Submodular optimization has found many applications in machine learning and beyond.

no code implementations • NeurIPS 2014 • Hastagiri P. Vanchinathan, Gábor Bartók, Andreas Krause

In every round, the learner suffers some loss and receives some feedback based on the action and the outcome.

no code implementations • 3 Nov 2014 • Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause

Such problems can often be reduced to maximizing a submodular set function subject to various constraints.

no code implementations • 28 Sep 2014 • Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, Andreas Krause

Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice?

no code implementations • 3 Jul 2014 • Daniel Golovin, Andreas Krause, Matthew Streeter

How should we dynamically rank information sources to maximize the value of the ranking?

no code implementations • 24 Feb 2014 • Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, J. Andrew Bagnell, Siddhartha Srinivasa

Instead of minimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses.

no code implementations • 10 Feb 2014 • Adish Singla, Ilija Bogunovic, Gábor Bartók, Amin Karbasi, Andreas Krause

How should we present training examples to learners to teach them classification rules?

no code implementations • 16 Jan 2014 • Andreas Krause, Eric Horvitz

We introduce and explore an economics of privacy in personalization, where people can opt to share personal information, in a standing or on-demand manner, in return for expected enhancements in the quality of an online service.

no code implementations • 15 Jan 2014 • Andreas Krause, Carlos Guestrin

In a sensor network, for example, it is important to select the subset of sensors that is expected to provide the strongest reduction in uncertainty.

no code implementations • 15 Jan 2014 • Amarjeet Singh, Andreas Krause, Carlos Guestrin, William J. Kaiser

In this paper, we present an efficient approach for near-optimally solving the NP-hard optimization problem of planning such informative paths.

no code implementations • NeurIPS 2013 • Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause

Such problems can often be reduced to maximizing a submodular set function subject to cardinality constraints.

no code implementations • NeurIPS 2013 • Josip Djolonga, Andreas Krause, Volkan Cevher

Many applications in machine learning require optimizing unknown functions defined over a high-dimensional space from noisy samples that are expensive to obtain.

no code implementations • 19 Aug 2013 • Adish Singla, Andreas Krause

Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications.

no code implementations • NeurIPS 2011 • Ryan G. Gomes, Peter Welinder, Andreas Krause, Pietro Perona

Is it possible to crowdsource categorization?

no code implementations • NeurIPS 2011 • Dan Feldman, Matthew Faulkner, Andreas Krause

In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations.

no code implementations • NeurIPS 2011 • Andreas Krause, Cheng S. Ong

How should we design experiments to maximize performance of a complex system, taking into account uncontrollable environmental conditions?

no code implementations • NeurIPS 2010 • Andreas Krause, Pietro Perona, Ryan G. Gomes

We present a framework that simultaneously clusters the data and trains a discriminative classifier.

no code implementations • NeurIPS 2010 • Peter Stobbe, Andreas Krause

Decomposable submodular functions are those that can be represented as sums of concave functions applied to linear functions.
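
In symbols, a decomposable submodular function as described above has the form below (with nonnegative weights, an assumption added here since it is what makes each concave-of-modular term submodular):

$$f(S) = \sum_{j=1}^{m} \phi_j\!\Big(\sum_{i \in S} w_{ji}\Big), \qquad \phi_j \ \text{concave}, \quad w_{ji} \ge 0.$$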

no code implementations • NeurIPS 2010 • Daniel Golovin, Andreas Krause, Debajyoti Ray

In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally.
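
A minimal noise-free sketch of such a splitting rule (assuming each hypothesis is a boolean labeling of tests and hypotheses are distinct, so some test always splits; an illustration, not the paper's exact formulation):

```python
def generalized_binary_search(hypotheses, tests, run_test):
    """Repeatedly pick the test whose worst-case outcome removes the most
    hypotheses, then discard hypotheses inconsistent with the result."""
    version_space = set(hypotheses)
    while len(version_space) > 1:
        def worst_case_size(t):
            yes = sum(h(t) for h in version_space)
            return max(yes, len(version_space) - yes)
        t = min(tests, key=worst_case_size)   # most even split of the version space
        outcome = run_test(t)
        version_space = {h for h in version_space if h(t) == outcome}
    return version_space.pop()
```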

no code implementations • 21 Mar 2010 • Daniel Golovin, Andreas Krause

Solving stochastic optimization problems under partial observability, where one needs to adaptively make decisions with uncertain outcomes, is a fundamental but notoriously difficult challenge.

2 code implementations • 21 Dec 2009 • Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias Seeger

Many applications require optimizing an unknown, noisy function that is expensive to evaluate.
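
This is the paper behind GP-UCB; a minimal sketch of the acquisition rule over a finite candidate set, using scikit-learn's GP as a stand-in (in the theory, $\beta$ follows an iteration-dependent schedule rather than the constant assumed here):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_ucb_next(gp: GaussianProcessRegressor, candidates: np.ndarray, beta: float = 2.0):
    """Pick the candidate maximizing the upper confidence bound
    mean + sqrt(beta) * std under the current GP posterior."""
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[int(np.argmax(mu + np.sqrt(beta) * sigma))]
```

In practice the GP is refit after each new evaluation, and the next query point is chosen by this rule.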

no code implementations • NeurIPS 2009 • Matthew Streeter, Daniel Golovin, Andreas Krause

Which ads should we display in sponsored search in order to maximize our revenue?

1 code implementation • SIGKDD 2007 • Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, Natalie Glance

We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude.
