Search Results for author: Soroosh Shafieezadeh-Abadeh

Found 8 papers, 6 papers with code

Semi-Discrete Optimal Transport: Hardness, Regularization and Numerical Solution

1 code implementation • 10 Mar 2021 • Bahar Taskesen, Soroosh Shafieezadeh-Abadeh, Daniel Kuhn

Semi-discrete optimal transport problems, which evaluate the Wasserstein distance between a discrete and a generic (possibly non-discrete) probability measure, are believed to be computationally hard.
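For intuition about the quantity being evaluated, the 1-Wasserstein distance between two fully discrete measures with equally many atoms can be computed in one dimension by matching sorted samples. This is a minimal illustrative sketch, not code from the paper, and it covers only the discrete–discrete 1-D case rather than the semi-discrete setting the paper studies:

```python
def wasserstein_1d(xs, ys):
    # 1-Wasserstein distance between two empirical measures with the
    # same number of atoms: in one dimension the optimal transport plan
    # matches order statistics, so the distance is the mean absolute
    # difference of the sorted samples.
    assert len(xs) == len(ys), "equal numbers of atoms assumed"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

The semi-discrete case, where one measure is continuous, has no such closed-form matching; that gap is exactly what makes the problem computationally interesting.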

Bridging Bayesian and Minimax Mean Square Error Estimation via Wasserstein Distributionally Robust Optimization

1 code implementation • 8 Nov 2019 • Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Daniel Kuhn, Peyman Mohajerin Esfahani

The proposed model can be viewed as a zero-sum game between a statistician, who chooses an estimator (a measurable function of the observation), and a fictitious adversary, who chooses a prior (a pair of signal and noise distributions ranging over independent Wasserstein balls); the statistician seeks to minimize, and the adversary to maximize, the expected squared estimation error.
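In illustrative notation (the symbols below are not taken from the paper; in particular the additive observation model is an assumption), the game described in the abstract has the minimax form:

```latex
\min_{\psi} \;
\max_{\substack{\mathbb{P}_x \in \mathbb{B}_{\rho_1}(\hat{\mathbb{P}}_x) \\
                \mathbb{P}_w \in \mathbb{B}_{\rho_2}(\hat{\mathbb{P}}_w)}}
\; \mathbb{E}\bigl[\, \| x - \psi(y) \|^2 \,\bigr],
\qquad y = x + w,
```

where $\psi$ is the statistician's estimator, and $\mathbb{B}_{\rho_1}(\hat{\mathbb{P}}_x)$ and $\mathbb{B}_{\rho_2}(\hat{\mathbb{P}}_w)$ are independent Wasserstein balls of radii $\rho_1, \rho_2$ around nominal signal and noise distributions.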

Calculating Optimistic Likelihoods Using (Geodesically) Convex Optimization

1 code implementation • NeurIPS 2019 • Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Man-Chung Yue, Daniel Kuhn, Wolfram Wiesemann

A fundamental problem arising in many areas of machine learning is the evaluation of the likelihood of a given observation under different nominal distributions.
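As a toy instance of evaluating the likelihood of an observation under a nominal distribution, consider a scalar Gaussian. This is a hypothetical background example, not the geodesically convex optimistic-likelihood formulation of the paper:

```python
import math

def gaussian_loglik(x, mean, var):
    # Log-likelihood of a scalar observation x under the nominal
    # distribution N(mean, var).
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
```

The paper's question is harder: it asks for the most optimistic such likelihood over a whole neighborhood of nominal distributions.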

Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning

no code implementations • 23 Aug 2019 • Daniel Kuhn, Peyman Mohajerin Esfahani, Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh

The goal of data-driven decision-making is to learn a decision from finitely many training samples that will perform well on unseen test samples.

Task: Decision Making

Wasserstein Distributionally Robust Kalman Filtering

1 code implementation • NeurIPS 2018 • Soroosh Shafieezadeh-Abadeh, Viet Anh Nguyen, Daniel Kuhn, Peyman Mohajerin Esfahani

Despite the non-convex nature of the ambiguity set, we prove that the estimation problem is equivalent to a tractable convex program.
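For background, the classical (non-robust) scalar Kalman measurement update that a distributionally robust filter generalizes can be sketched as follows; the variable names are illustrative and this is not the paper's estimator:

```python
def kalman_update(mean, var, obs, obs_var):
    # Standard scalar Kalman measurement update: combine the prior
    # belief N(mean, var) with an observation of variance obs_var.
    k = var / (var + obs_var)              # Kalman gain
    new_mean = mean + k * (obs - mean)     # shift toward the observation
    new_var = (1.0 - k) * var              # posterior variance shrinks
    return new_mean, new_var
```

The robust variant replaces the single nominal noise model with an ambiguity set of distributions and optimizes against the worst case.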

Regularization via Mass Transportation

1 code implementation • 27 Oct 2017 • Soroosh Shafieezadeh-Abadeh, Daniel Kuhn, Peyman Mohajerin Esfahani

The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution.
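The empirical risk referred to here is a plain sample average of a loss; a minimal sketch with a hypothetical squared loss (illustrative only, not the paper's regularized formulation):

```python
def empirical_risk(loss, predictor, samples):
    # Empirical risk: the average loss of the predictor over the
    # training samples, i.e. the expectation of the loss under the
    # empirical distribution.
    return sum(loss(predictor(x), y) for x, y in samples) / len(samples)

def squared_loss(y_hat, y):
    return (y_hat - y) ** 2
```

Mass-transportation regularization then replaces the empirical distribution with a worst-case distribution in a Wasserstein ball around it.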

Task: Generalization Bounds
