no code implementations • 10 Jul 2023 • Anand Deo, Karthyek Murthy
This paper provides an introductory overview of how one may employ importance sampling effectively as a tool for solving stochastic optimization formulations incorporating tail risk measures such as Conditional Value-at-Risk.
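A minimal sketch of the idea, assuming a standard Gaussian loss and a mean-shifted Gaussian proposal (the level `alpha` and the `shift` are illustrative choices, not taken from the paper): the proposal pushes samples into the tail, and likelihood ratios correct the resulting bias when estimating VaR and CVaR.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch: estimate CVaR_alpha of a loss L ~ N(0, 1) by importance
# sampling from a mean-shifted proposal N(shift, 1). The shift pushes more
# samples into the tail; likelihood ratios correct the bias.
rng = np.random.default_rng(0)
alpha, n, shift = 0.99, 100_000, 2.5

L = rng.normal(loc=shift, size=n)                        # proposal samples
w = np.exp(norm.logpdf(L) - norm.logpdf(L, loc=shift))   # likelihood ratios

# Weighted alpha-quantile of the loss as the VaR estimate.
order = np.argsort(L)
cum_w = np.cumsum(w[order]) / np.sum(w)
var_est = L[order][np.searchsorted(cum_w, alpha)]

# CVaR_alpha = E[L | L >= VaR_alpha], estimated with the same weights.
tail = L >= var_est
cvar_est = np.sum(w[tail] * L[tail]) / np.sum(w[tail])

print(f"IS estimates: VaR ~ {var_est:.3f}, CVaR ~ {cvar_est:.3f}")
print(f"Exact Gaussian CVaR: {norm.pdf(norm.ppf(alpha)) / (1 - alpha):.3f}")
```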
no code implementations • 12 Aug 2022 • Yanqiu Ruan, Xiaobo Li, Karthyek Murthy, Karthik Natarajan
The marginal distribution model (MDM) is one such model, requiring only the specification of the marginal distributions of the random utilities.
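A minimal sketch of what "only marginals" buys, assuming the characterization reported in the MDM literature that the choice probabilities take the form P_i = 1 - F_i(lam - V_i) with sum_i P_i = 1, where F_i is the marginal CDF of the i-th utility shock. With unit-rate exponential marginals this recovers multinomial logit, which gives a numerical check; the utilities `V` are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of MDM-style choice probabilities, assuming the characterization
# P_i = 1 - F_i(lam - V_i) with sum_i P_i = 1, where F_i is the marginal CDF
# of the i-th utility shock. With unit-rate exponential marginals this
# should recover the multinomial logit probabilities.
V = np.array([1.0, 0.5, -0.3])           # deterministic utilities (illustrative)

def tail(t):
    # 1 - F(t) for a unit-rate exponential shock
    return np.where(t <= 0.0, 1.0, np.exp(-t))

def excess(lam):
    return np.sum(tail(lam - V)) - 1.0

lam = brentq(excess, V.max(), V.max() + 50.0)   # normalizing constant
p_mdm = tail(lam - V)

p_logit = np.exp(V) / np.exp(V).sum()
print("MDM probabilities  :", np.round(p_mdm, 4))
print("Logit probabilities:", np.round(p_logit, 4))
```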
no code implementations • 26 Jun 2022 • Anand Deo, Karthyek Murthy, Tirtho Sarker
This paper investigates the use of the retrospective approximation (RA) solution paradigm for solving risk-averse optimization problems effectively via importance sampling (IS).
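A minimal RA sketch, assuming an illustrative CVaR-minimization problem in the Rockafellar-Uryasev form: each stage solves a sample-average approximation (SAA) with a geometrically larger sample and a tighter tolerance, warm-started at the previous solution. The paper pairs RA with importance sampling; plain Monte Carlo is used below to keep the sketch short.

```python
import numpy as np
from scipy.optimize import minimize

# Retrospective approximation sketch for minimizing CVaR_0.95 of the
# illustrative loss l(theta, xi) = (theta - xi)^2, via the Rockafellar-
# Uryasev formulation min_{theta,t} t + E[max(l - t, 0)] / (1 - alpha).
rng = np.random.default_rng(1)
alpha = 0.95

def saa_objective(z, xi):
    theta, t = z
    loss = (theta - xi) ** 2
    return t + np.mean(np.maximum(loss - t, 0.0)) / (1.0 - alpha)

z = np.array([0.0, 1.0])                    # initial (theta, t)
for stage in range(6):
    n_k = 200 * 2 ** stage                  # geometrically growing samples
    tol_k = 1e-2 / 2 ** stage               # geometrically tightening tolerance
    xi = rng.lognormal(mean=0.0, sigma=0.5, size=n_k)
    res = minimize(saa_objective, z, args=(xi,), method="Nelder-Mead",
                   options={"xatol": tol_k, "fatol": tol_k})
    z = res.x                               # warm start for the next stage
    print(f"stage {stage}: n={n_k:6d}, theta={z[0]:.4f}, CVaR~{res.fun:.4f}")
```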
no code implementations • 4 Aug 2021 • Jose Blanchet, Karthyek Murthy, Viet Anh Nguyen
We consider statistical methods which invoke a min-max distributionally robust formulation to extract good out-of-sample performance in data-driven optimization and learning problems.
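As a hedged sketch of the inner maximization in such a min-max formulation: maximize the expected loss over distributions within transport budget delta of the empirical measure. For tractability the worst case below is restricted to redistributions over the observed support points, which lower-bounds the unrestricted worst case; the loss and radius are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Inner problem of a min-max Wasserstein DRO formulation, as a linear
# program over transport plans pi (n x n, flattened row-major):
#   max sum_ij pi_ij * loss(x_j)
#   s.t. sum_j pi_ij = 1/n for each i,  sum_ij pi_ij * c(x_i, x_j) <= delta.
rng = np.random.default_rng(2)
x = rng.normal(size=8)                      # observed data points
n = x.size
loss = x ** 2                               # illustrative loss at each point
cost = (x[:, None] - x[None, :]) ** 2       # quadratic transport cost
delta = 0.5                                 # transport budget

c_vec = -np.tile(loss, n)                   # linprog minimizes, so negate
A_eq = np.kron(np.eye(n), np.ones(n))       # row sums: sum_j pi_ij = 1/n
b_eq = np.full(n, 1.0 / n)
A_ub = cost.reshape(1, -1)                  # total transport cost <= delta
b_ub = [delta]

res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print(f"empirical loss : {loss.mean():.4f}")
print(f"worst-case loss: {-res.fun:.4f}  (within budget delta={delta})")
```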
no code implementations • 16 Jun 2021 • Anand Deo, Karthyek Murthy
This paper considers Importance Sampling (IS) for the estimation of tail risks of a loss defined in terms of a sophisticated object such as a machine learning feature map or a mixed integer linear optimization formulation.
no code implementations • 2 Jun 2021 • Nian Si, Karthyek Murthy, Jose Blanchet, Viet Anh Nguyen
We present a statistical testing framework to detect if a given machine learning classifier fails to satisfy a wide range of group fairness notions.
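As a hedged illustration of the hypothesis-testing framing (not the paper's test, which is more general), here is the simplest instance of testing a group fairness notion: a classical two-proportion z-test for demographic parity, where the null says the classifier's positive-prediction rate is equal across two groups. All data and names below are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Two-proportion z-test for demographic parity: H0 says the classifier's
# positive-prediction rate is the same for group A and group B.
def demographic_parity_test(yhat_a, yhat_b):
    n_a, n_b = len(yhat_a), len(yhat_b)
    p_a, p_b = np.mean(yhat_a), np.mean(yhat_b)
    p_pool = (np.sum(yhat_a) + np.sum(yhat_b)) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, 2 * norm.sf(abs(z))           # two-sided p-value

rng = np.random.default_rng(3)
yhat_a = rng.binomial(1, 0.55, size=500)    # predictions for group A
yhat_b = rng.binomial(1, 0.45, size=500)    # predictions for group B
z, p = demographic_parity_test(yhat_a, yhat_b)
print(f"z = {z:.3f}, p-value = {p:.4f}")
```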
no code implementations • 14 Feb 2021 • Anand Deo, Karthyek Murthy
This paper presents a novel Importance Sampling (IS) scheme for estimating distribution tails of performance measures modeled with a rich set of tools such as linear programs, integer linear programs, piecewise linear/quadratic objectives, feature maps specified with deep neural networks, etc.
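A generic IS sketch for this setting (not the paper's scheme): estimate the tail probability P(L(xi) > u) where L is a piecewise-linear performance measure, the maximum of a few affine functions of Gaussian inputs. The proposal inflates the input variance so tail events occur more often, and likelihood ratios undo the change of measure; the pieces, threshold, and inflation factor are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Variance-inflation importance sampling for P(L(xi) > u), where
# L(xi) = max_k (a_k . xi + b_k) is piecewise linear and xi ~ N(0, I_d).
rng = np.random.default_rng(4)
d, n, u, sigma = 5, 200_000, 12.0, 3.0

A = rng.normal(size=(4, d))                 # illustrative affine pieces
b = np.array([0.0, 0.5, -0.5, 1.0])

xi = sigma * rng.normal(size=(n, d))        # proposal N(0, sigma^2 I)
logw = mvn.logpdf(xi, mean=np.zeros(d)) - \
       mvn.logpdf(xi, mean=np.zeros(d), cov=sigma ** 2 * np.eye(d))
L = (xi @ A.T + b).max(axis=1)              # piecewise-linear loss

p_is = np.mean(np.exp(logw) * (L > u))
print(f"IS tail estimate P(L > u): {p_is:.3e}")
```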
no code implementations • 4 Jun 2019 • Jose Blanchet, Karthyek Murthy, Nian Si
Wasserstein distributionally robust optimization estimators are obtained as solutions of min-max problems in which the statistician selects a parameter minimizing the worst-case loss among all probability models within a certain distance (in a Wasserstein sense) from the underlying empirical measure.
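The min-max structure can be sketched through the strong-duality reformulation known in this literature. Assuming the illustrative loss l(x) = x^2 and transport cost c(x, y) = (x - y)^2, the inner supremum has a closed form that the grid search below checks numerically.

```python
import numpy as np

# Strong-duality sketch: for l(x) = x^2 and c(x, y) = (x - y)^2,
#   sup_{P: W(P, P_n) <= delta} E_P[l]
#     = min_{lam > 1} lam * delta + (1/n) sum_i sup_x [x^2 - lam (x - x_i)^2]
#     = min_{lam > 1} lam * delta + lam / (lam - 1) * mean(x_i^2),
# since sup_x [x^2 - lam (x - x_i)^2] = lam * x_i^2 / (lam - 1) for lam > 1.
# The closed-form value is (sqrt(delta) + sqrt(mean(x_i^2)))^2.
rng = np.random.default_rng(5)
x = rng.normal(size=50)
m2, delta = np.mean(x ** 2), 0.3

lam = np.linspace(1.0 + 1e-4, 50.0, 500_000)
dual = lam * delta + lam / (lam - 1.0) * m2

print(f"dual grid search: {dual.min():.5f}")
print(f"closed form     : {(np.sqrt(delta) + np.sqrt(m2)) ** 2:.5f}")
```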
1 code implementation • 4 Oct 2018 • Jose Blanchet, Karthyek Murthy, Fan Zhang
We consider optimal transport based distributionally robust optimization (DRO) problems with locally strongly convex transport cost functions and affine decision rules.
no code implementations • 19 May 2017 • Jose Blanchet, Yang Kang, Fan Zhang, Karthyek Murthy
Recently, Blanchet, Kang, and Murthy (2016) and Blanchet and Kang (2017) showed that several machine learning algorithms, such as square-root Lasso, Support Vector Machines, and regularized logistic regression, among many others, can be represented exactly as distributionally robust optimization (DRO) problems.
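A minimal sketch of one of those DRO-representable estimators, the square-root Lasso, which minimizes ||y - X beta||_2 / sqrt(n) + lam * ||beta||_1. Plain subgradient descent with a decaying step is used below to keep the sketch self-contained; a dedicated convex solver would be used in practice, and the data and lam are illustrative.

```python
import numpy as np

# Square-root Lasso: min_beta ||y - X beta||_2 / sqrt(n) + lam * ||beta||_1,
# solved by subgradient descent with a decaying step size.
rng = np.random.default_rng(6)
n, d, lam = 200, 10, 0.1
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.0, 0.5]            # sparse ground truth
y = X @ beta_true + 0.5 * rng.normal(size=n)

beta = np.zeros(d)
for k in range(1, 5001):
    r = y - X @ beta
    g = -X.T @ r / (np.sqrt(n) * max(np.linalg.norm(r), 1e-12)) \
        + lam * np.sign(beta)               # subgradient of the objective
    beta -= (0.5 / np.sqrt(k)) * g          # decaying step size
print("estimate:", np.round(beta, 3))
print("truth   :", beta_true)
```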