1 code implementation • 12 May 2020 • Dirk Lauinger, François Vuille, Daniel Kuhn
We formulate a robust optimization problem that maximizes a vehicle owner's expected profit from selling primary frequency regulation to the grid and guarantees that market commitments are met at all times for all frequency deviation trajectories in a functional uncertainty set that encodes applicable legislation.
Optimization and Control
1 code implementation • 27 Oct 2017 • Soroosh Shafieezadeh-Abadeh, Daniel Kuhn, Peyman Mohajerin Esfahani
The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution.
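As a concrete illustration of the empirical risk mentioned above, the following minimal sketch averages a squared loss over synthetic training samples; the linear model, loss choice, and data are assumptions made purely for illustration and are not taken from the paper.

```python
import numpy as np

# Synthetic regression data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # 100 samples, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

def empirical_risk(w, X, y):
    """Average squared loss under the empirical distribution of (X, y)."""
    residuals = X @ w - y
    return np.mean(residuals ** 2)

# The least-squares solution minimizes the empirical risk for the squared loss.
w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("empirical risk at the least-squares solution:", empirical_risk(w_hat, X, y))
```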
1 code implementation • 24 Apr 2023 • Johannes Schimunek, Philipp Seidl, Lukas Friedrich, Daniel Kuhn, Friedrich Rippmann, Sepp Hochreiter, Günter Klambauer
Our novel concept for molecule representation enrichment is to associate molecules from both the support set and the query set with a large set of reference (context) molecules through a Modern Hopfield Network.
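A modern Hopfield network retrieval step amounts to a softmax-weighted average of stored patterns. The toy sketch below uses made-up "molecule" embeddings and an arbitrary inverse temperature `beta`; it shows the generic retrieval mechanism only, not the paper's actual architecture.

```python
import numpy as np

def hopfield_retrieve(query, context, beta=2.0):
    """One modern-Hopfield update: softmax(beta * similarities) times the stored patterns."""
    scores = beta * context @ query                # similarity of the query to each context pattern
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()
    return weights @ context                       # enriched query representation

rng = np.random.default_rng(1)
context = rng.normal(size=(50, 8))   # 50 reference "molecule" embeddings (synthetic)
query = rng.normal(size=8)           # one query embedding (synthetic)
print(hopfield_retrieve(query, context))
```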
1 code implementation • NeurIPS 2018 • Soroosh Shafieezadeh-Abadeh, Viet Anh Nguyen, Daniel Kuhn, Peyman Mohajerin Esfahani
Despite the non-convex nature of the ambiguity set, we prove that the estimation problem is equivalent to a tractable convex program.
1 code implementation • 27 Jun 2022 • Roland Schwan, Colin N. Jones, Daniel Kuhn
We provide sufficient conditions for the closed-loop stability of the candidate policy in terms of the worst-case approximation error with respect to the baseline policy, and we show that these conditions can be checked by solving a Mixed-Integer Quadratic Program (MIQP).
1 code implementation • 7 Mar 2023 • Soroosh Shafieezadeh-Abadeh, Liviu Aolaritei, Florian Dörfler, Daniel Kuhn
We study optimal transport-based distributionally robust optimization problems where a fictitious adversary, often envisioned as nature, can choose the distribution of the uncertain problem parameters by reshaping a prescribed reference distribution at a finite transportation cost.
1 code implementation • 10 Mar 2021 • Bahar Taskesen, Soroosh Shafieezadeh-Abadeh, Daniel Kuhn
Semi-discrete optimal transport problems, which evaluate the Wasserstein distance between a discrete and a generic (possibly non-discrete) probability measure, are believed to be computationally hard.
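For one-dimensional distributions, the type-1 Wasserstein distance has a closed form via quantile functions, and SciPy implements it directly; the sketch below evaluates it between two synthetic samples. This is only a simple discrete-to-discrete illustration, not the semi-discrete setting studied in the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
atoms = rng.choice([0.0, 1.0, 2.0], size=200)   # samples from a discrete measure
samples = rng.normal(loc=1.0, size=200)         # samples approximating a continuous measure

# 1-D type-1 Wasserstein distance between the two empirical distributions.
print(wasserstein_distance(atoms, samples))
```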
1 code implementation • 1 Jun 2021 • Bahar Taskesen, Man-Chung Yue, Jose Blanchet, Daniel Kuhn, Viet Anh Nguyen
Given available data, we investigate novel strategies to synthesize a family of least squares estimator experts that are robust with regard to moment conditions.
1 code implementation • 7 Jun 2023 • Yves Rychener, Daniel Kuhn, Tobias Sutter
We develop a principled approach to end-to-end learning in stochastic optimization.
1 code implementation • NeurIPS 2019 • Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Man-Chung Yue, Daniel Kuhn, Wolfram Wiesemann
A fundamental problem arising in many areas of machine learning is the evaluation of the likelihood of a given observation under different nominal distributions.
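For Gaussian nominal distributions, evaluating the likelihood of an observation is a log-density computation; the sketch below scores one synthetic observation under two illustrative nominal Gaussians. It does not implement the optimistic-likelihood machinery developed in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

x = np.array([0.2, -0.1])                       # a single observation (synthetic)

# Two nominal Gaussian distributions with illustrative parameters.
nominal_1 = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))
nominal_2 = multivariate_normal(mean=[1.0, 1.0], cov=0.5 * np.eye(2))

print("log-likelihood under nominal 1:", nominal_1.logpdf(x))
print("log-likelihood under nominal 2:", nominal_2.logpdf(x))
```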
1 code implementation • NeurIPS 2019 • Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Man-Chung Yue, Daniel Kuhn, Wolfram Wiesemann
The likelihood function is a fundamental component in Bayesian statistics.
1 code implementation • 12 Jun 2021 • Mengmeng Li, Tobias Sutter, Daniel Kuhn
We study a stochastic program where the probability distribution of the uncertain problem parameters is unknown and only indirectly observed via finitely many correlated samples generated by an unknown Markov chain with $d$ states.
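A natural first step with correlated samples from an unknown $d$-state Markov chain is to estimate the transition matrix from observed transition counts. The sketch below does this on a synthetic trajectory; both the estimator and the data-generating chain are illustrative assumptions, not the paper's method.

```python
import numpy as np

def estimate_transition_matrix(trajectory, d):
    """Maximum-likelihood estimate of a d-state transition matrix from one trajectory."""
    counts = np.zeros((d, d))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows without observations fall back to the uniform distribution.
    return np.divide(counts, row_sums, out=np.full_like(counts, 1.0 / d), where=row_sums > 0)

# Simulate a trajectory from a known 3-state chain and recover its transition matrix.
rng = np.random.default_rng(3)
P_true = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2], [0.3, 0.3, 0.4]])
traj = [0]
for _ in range(5000):
    traj.append(rng.choice(3, p=P_true[traj[-1]]))
print(estimate_transition_matrix(traj, 3))
```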
1 code implementation • 8 Nov 2019 • Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Daniel Kuhn, Peyman Mohajerin Esfahani
The proposed model can be viewed as a zero-sum game between a statistician choosing an estimator -- that is, a measurable function of the observation -- and a fictitious adversary choosing a prior -- that is, a pair of signal and noise distributions ranging over independent Wasserstein balls -- with the goal to minimize and maximize the expected squared estimation error, respectively.
1 code implementation • 30 May 2022 • Yves Rychener, Bahar Taskesen, Daniel Kuhn
This means that the distributions of the predictions within the two groups should be close with respect to the Kolmogorov distance, and fairness is achieved by penalizing the dissimilarity of these two distributions in the objective function of the learning problem.
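The Kolmogorov distance between two prediction distributions is the largest absolute gap between their empirical CDFs. The sketch below computes this two-sample statistic for synthetic group-wise predictions; it is a simplified stand-in for the penalty term, not the paper's training procedure.

```python
import numpy as np

def kolmogorov_distance(a, b):
    """Largest absolute difference between the empirical CDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(4)
preds_group_1 = rng.normal(0.0, 1.0, size=300)   # predictions for group 1 (synthetic)
preds_group_2 = rng.normal(0.3, 1.0, size=400)   # predictions for group 2 (synthetic)
print(kolmogorov_distance(preds_group_1, preds_group_2))
```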
no code implementations • 18 May 2018 • Viet Anh Nguyen, Daniel Kuhn, Peyman Mohajerin Esfahani
We introduce a distributionally robust maximum likelihood estimation model with a Wasserstein ambiguity set to infer the inverse covariance matrix of a $p$-dimensional Gaussian random vector from $n$ independent samples.
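A non-robust baseline for this task is to invert a (shrunk) sample covariance matrix. The sketch below shows only that baseline, with an arbitrary shrinkage weight, and not the Wasserstein-robust estimator proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
p, n = 5, 40
Sigma_true = np.eye(p) + 0.3 * np.ones((p, p))            # true covariance (synthetic)
X = rng.multivariate_normal(np.zeros(p), Sigma_true, n)   # n i.i.d. Gaussian samples

S = np.cov(X, rowvar=False)                     # sample covariance
S_shrunk = 0.9 * S + 0.1 * np.eye(p)            # simple shrinkage keeps the estimate invertible
precision_hat = np.linalg.inv(S_shrunk)         # plug-in estimate of the inverse covariance
print(precision_hat)
```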
no code implementations • 22 May 2017 • Napat Rujeerapaiboon, Kilian Schindler, Daniel Kuhn, Wolfram Wiesemann
Plain vanilla K-means clustering has proven to be successful in practice, yet it suffers from outlier sensitivity and may produce highly unbalanced clusters.
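For reference, the sketch below runs plain Lloyd's K-means on synthetic two-dimensional data containing a single far-away point, which illustrates the outlier sensitivity mentioned above; the data and implementation are illustrative and unrelated to the balanced formulation developed in the paper.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-center assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    return centers, labels

rng = np.random.default_rng(6)
cluster_a = rng.normal([0, 0], 0.3, size=(50, 2))
cluster_b = rng.normal([3, 3], 0.3, size=(50, 2))
outlier = np.array([[30.0, 30.0]])               # one distant point can drag a centroid away
X = np.vstack([cluster_a, cluster_b, outlier])
centers, labels = kmeans(X, k=2)
print(centers)
```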
no code implementations • NeurIPS 2015 • Soroosh Shafieezadeh-Abadeh, Peyman Mohajerin Esfahani, Daniel Kuhn
This paper proposes a distributionally robust approach to logistic regression.
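A closely related non-robust baseline is norm-regularized logistic regression. The sketch below fits such a model with scikit-learn on synthetic data; the penalty type and regularization strength are illustrative choices and the code is only a generic stand-in, not the paper's robust reformulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.5, -1.0, 0.0, 0.5]) + 0.5 * rng.normal(size=200) > 0).astype(int)

# In regularized formulations, a robustness radius typically plays the role of a
# regularization weight; C is its inverse in scikit-learn's parametrization (illustrative value).
model = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
model.fit(X, y)
print(model.coef_)
```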
no code implementations • NeurIPS 2013 • Grani Adiwena Hanasusanto, Daniel Kuhn
In stochastic optimal control the distribution of the exogenous noise is typically unknown and must be inferred from limited data before dynamic programming (DP)-based solution schemes can be applied.
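A minimal illustration of this pipeline is to estimate the noise distribution empirically from samples and then run backward dynamic programming on a small finite-horizon problem. Everything in the sketch below (the inventory-control setting, the demand model, the costs, and the horizon) is made up for illustration and is not taken from the paper.

```python
import numpy as np

# Demand samples stand in for the unknown exogenous noise (synthetic data).
rng = np.random.default_rng(10)
demand_samples = rng.poisson(2, size=100)
support = np.arange(demand_samples.max() + 1)
probs = np.bincount(demand_samples, minlength=len(support)) / len(demand_samples)

T, max_stock = 5, 10                              # horizon and inventory capacity
order_cost, holding_cost, shortage_cost = 1.0, 0.1, 4.0

# Backward dynamic programming on the empirical demand distribution.
V = np.zeros(max_stock + 1)                       # terminal cost-to-go
for t in reversed(range(T)):
    V_new = np.full(max_stock + 1, np.inf)
    for s in range(max_stock + 1):
        for order in range(max_stock + 1 - s):
            stock = s + order
            next_stock = np.maximum(stock - support, 0)
            cost = (order_cost * order
                    + probs @ (holding_cost * next_stock
                               + shortage_cost * np.maximum(support - stock, 0)
                               + V[next_stock]))
            V_new[s] = min(V_new[s], cost)
    V = V_new
print("cost-to-go from empty inventory:", V[0])
```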
no code implementations • 7 Mar 2019 • Ekaterina Abramova, Luke Dickens, Daniel Kuhn, Aldo Faisal
We show that a small number of locally optimal linear controllers are able to solve global nonlinear control problems with unknown dynamics when combined with a reinforcement learner in this hierarchical framework.
no code implementations • 23 Aug 2019 • Daniel Kuhn, Peyman Mohajerin Esfahani, Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh
The goal of data-driven decision-making is to learn a decision from finitely many training samples that will perform well on unseen test samples.
no code implementations • 15 Apr 2020 • Man-Chung Yue, Daniel Kuhn, Wolfram Wiesemann
In this technical note we prove that the Wasserstein ball is weakly compact under mild conditions, and we offer necessary and sufficient conditions for the existence of optimal solutions.
no code implementations • 18 Jul 2020 • Bahar Taskesen, Viet Anh Nguyen, Daniel Kuhn, Jose Blanchet
We propose a distributionally robust logistic regression model with an unfairness penalty that prevents discrimination with respect to sensitive attributes such as gender or ethnicity.
no code implementations • 9 Dec 2020 • Bahar Taskesen, Jose Blanchet, Daniel Kuhn, Viet Anh Nguyen
Leveraging the geometry of the feature space, the test statistic quantifies the distance of the empirical distribution supported on the test samples to the manifold of distributions that render a pre-trained classifier fair.
no code implementations • 5 Mar 2021 • Wouter Jongeneel, Tobias Sutter, Daniel Kuhn
Two dynamical systems are topologically equivalent when their phase-portraits can be morphed into each other by a homeomorphic coordinate transformation on the state space.
Optimization and Control
no code implementations • 9 Mar 2021 • Wouter Jongeneel, Man-Chung Yue, Daniel Kuhn
Most zeroth-order optimization algorithms mimic a first-order algorithm but replace the gradient of the objective function with some gradient estimator that can be computed from a small number of function evaluations.
Optimization and Control (MSC 65D25, 65G50, 65K05, 65Y04, 65Y20, 90C56)
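A standard example of such a gradient estimator is the two-point smoothing estimator built from random directions, sketched below on a simple quadratic. The specific estimator, smoothing radius, and sample count are generic illustrations, not the construction analyzed in the paper.

```python
import numpy as np

def two_point_gradient_estimate(f, x, mu=1e-4, num_samples=200, rng=None):
    """Average of (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random Gaussian directions u."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.normal(size=x.shape)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / num_samples

f = lambda x: np.sum((x - 1.0) ** 2)          # objective with known gradient 2*(x - 1)
x = np.zeros(3)
print(two_point_gradient_estimate(f, x))      # should be close to [-2, -2, -2]
```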
2 code implementations • NeurIPS 2021 • Tobias Sutter, Andreas Krause, Daniel Kuhn
Training models that perform well under distribution shifts is a central challenge in machine learning.
no code implementations • 18 Dec 2021 • Viet Anh Nguyen, Soroosh Shafiee, Damir Filipović, Daniel Kuhn
We introduce a universal framework for mean-covariance robust risk measurement and portfolio optimization.
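A non-robust point of comparison is the classical minimum-variance portfolio computed from plug-in mean and covariance estimates. The sketch below computes exactly that baseline on synthetic returns; it is not the robust framework introduced in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
returns = rng.normal(0.001, 0.02, size=(250, 4))   # synthetic daily returns for 4 assets

mu = returns.mean(axis=0)                           # plug-in mean estimate
Sigma = np.cov(returns, rowvar=False)               # plug-in covariance estimate

# Minimum-variance weights: Sigma^{-1} 1, normalized to sum to one.
ones = np.ones(len(mu))
w = np.linalg.solve(Sigma, ones)
w /= w.sum()
print("weights:", w, "expected return:", w @ mu)
```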
no code implementations • 2 Mar 2022 • Bahar Taşkesen, Soroosh Shafieezadeh-Abadeh, Daniel Kuhn, Karthik Natarajan
We study the computational complexity of the optimal transport problem that evaluates the Wasserstein distance between the distributions of two K-dimensional discrete random vectors.
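The discrete optimal transport problem is a linear program over couplings with prescribed marginals. The sketch below solves a tiny instance with `scipy.optimize.linprog`, purely to make the object of study concrete; the distributions and cost matrix are invented, and the paper's focus is the complexity of this problem for K-dimensional marginals, not this generic LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Two small discrete distributions and a squared-distance cost matrix (synthetic).
p = np.array([0.5, 0.3, 0.2])                 # source probabilities on points 0, 1, 2
q = np.array([0.4, 0.6])                      # target probabilities on points 0.5, 2.5
C = (np.array([0.0, 1.0, 2.0])[:, None] - np.array([0.5, 2.5])[None, :]) ** 2

m, n = C.shape
# Equality constraints: every row of the coupling sums to p_i, every column to q_j.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([p, q])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("optimal transport cost:", res.fun)
```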
no code implementations • 28 Nov 2022 • Halil İbrahim Bayrak, Çağıl Koçyiğit, Daniel Kuhn, Mustafa Çelebi Pınar
However, this result relies on the unrealistic assumption that the agents' types follow known, independent probability distributions.
no code implementations • 30 May 2023 • Mengmeng Li, Daniel Kuhn, Tobias Sutter
We propose policy gradient algorithms for robust infinite-horizon Markov decision processes (MDPs) with non-rectangular uncertainty sets, thereby addressing an open challenge in the robust MDP literature.
no code implementations • 10 Aug 2023 • Jose Blanchet, Daniel Kuhn, Jiajin Li, Bahar Taskesen
In the past few years, there has been considerable interest in two prominent approaches for Distributionally Robust Optimization (DRO): divergence-based and Wasserstein-based methods.
no code implementations • 13 Nov 2023 • Wouter Jongeneel, Mengmeng Li, Daniel Kuhn
Motivated by policy gradient methods in the context of reinforcement learning, we derive the first large deviation rate function for the iterates generated by stochastic gradient descent for possibly non-convex objectives satisfying a Polyak-Łojasiewicz condition.
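Quadratic least-squares objectives with a consistent right-hand side are a textbook example of the Polyak-Łojasiewicz condition, so a plain SGD run on such a problem is a simple setting in which results of this kind apply. The sketch below is only that generic SGD loop on synthetic data, with arbitrary step size and epoch count; it is not the analysis or algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.normal(size=(200, 5))
b = A @ rng.normal(size=5)                    # consistent system, so the optimal value is zero

def sgd_least_squares(A, b, step=0.01, epochs=20, seed=0):
    """Plain SGD on f(x) = (1/2n) * ||A x - b||^2, one randomly ordered row per update."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(b)):
            grad_i = (A[i] @ x - b[i]) * A[i]   # gradient of the i-th summand
            x -= step * grad_i
    return x

x_hat = sgd_least_squares(A, b)
print("final objective:", 0.5 * np.mean((A @ x_hat - b) ** 2))
```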