no code implementations • 13 Sep 2020 • Jose Blanchet, Yang Kang, Jose Luis Montiel Olea, Viet Anh Nguyen, Xuhui Zhang
This paper shows that dropout training in Generalized Linear Models is the minimax solution of a two-player, zero-sum game where an adversarial nature corrupts a statistician's covariates using a multiplicative nonparametric errors-in-variables model.
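The minimax framing above can be illustrated with a toy sketch: the "adversarial nature" move corresponds to multiplying each covariate by an independent inverted-Bernoulli mask during SGD, which is exactly standard dropout training on the inputs of a logistic-regression GLM. All data, names, and hyperparameters below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression GLM data (purely illustrative).
n, d = 200, 5
X = rng.normal(size=(n, d))
true_beta = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def dropout_sgd(X, y, keep_prob=0.8, lr=0.1, epochs=50):
    """SGD where each covariate is multiplied by an independent
    Bernoulli(keep_prob)/keep_prob mask -- the multiplicative
    errors-in-variables corruption in the paper's minimax game."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            mask = (rng.random(d) < keep_prob) / keep_prob
            xi = X[i] * mask                  # nature corrupts the covariates
            p = 1 / (1 + np.exp(-xi @ beta))
            beta -= lr * (p - y[i]) * xi      # statistician's gradient step
    return beta

beta_hat = dropout_sgd(X, y)
print(beta_hat.round(2))
```

The inverse scaling by `keep_prob` keeps the corrupted covariates unbiased for the originals, which is the standard "inverted dropout" convention.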
no code implementations • 20 May 2019 • Jose Blanchet, Yang Kang, Fan Zhang, Zhangyi Hu
Distributionally Robust Optimization (DRO) has been shown to provide a flexible framework for decision making under uncertainty and statistical estimation.
no code implementations • 18 Sep 2018 • Kamal Al-Sabahi, Zhang Zuping, Yang Kang
The model uses a bidirectional encoder-decoder architecture in which both the encoder and the decoder are bidirectional LSTMs.
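The core idea of a bidirectional encoder — each position's representation carries both left and right context — can be sketched with plain tanh-RNN cells in place of LSTMs (a simplification; the paper uses LSTMs, and all shapes and weights below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_pass(xs, Wx, Wh):
    """One directional tanh-RNN pass; returns the hidden state at each step."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return np.stack(states)

def bidirectional_encode(xs, Wx_f, Wh_f, Wx_b, Wh_b):
    """Concatenate forward and time-reversed backward states, as a
    bidirectional encoder does (tanh cells here instead of LSTM cells)."""
    fwd = rnn_pass(xs, Wx_f, Wh_f)
    bwd = rnn_pass(xs[::-1], Wx_b, Wh_b)[::-1]   # re-align to forward time
    return np.concatenate([fwd, bwd], axis=1)

T, d_in, d_h = 6, 4, 3
xs = rng.normal(size=(T, d_in))
params = [rng.normal(scale=0.5, size=s) for s in
          [(d_h, d_in), (d_h, d_h), (d_h, d_in), (d_h, d_h)]]
H = bidirectional_encode(xs, *params)
print(H.shape)   # each step sees both directions: hidden size doubles
```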
no code implementations • 8 Jul 2018 • Kamal Al-Sabahi, Zhang Zuping, Yang Kang
The new schemes combine the strengths of traditional term-weighting schemes and word embeddings.
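One simple instance of combining a traditional weighting scheme with embeddings is a TF-IDF-weighted average of word vectors; the sketch below uses random vectors as stand-ins for pre-trained embeddings and is not the paper's exact scheme:

```python
import math
import numpy as np

# Tiny corpus; random vectors stand in for pre-trained embeddings.
docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["cat", "and", "dog"]]
vocab = sorted({w for d in docs for w in d})
rng = np.random.default_rng(2)
emb = {w: rng.normal(size=8) for w in vocab}

def idf(word):
    """Inverse document frequency over the toy corpus."""
    df = sum(word in d for d in docs)
    return math.log(len(docs) / df)

def doc_vector(doc):
    """TF-IDF-weighted average of word vectors: the classical weights
    decide how much each embedding contributes to the document vector."""
    weights = np.array([doc.count(w) / len(doc) * idf(w) for w in doc])
    vecs = np.stack([emb[w] for w in doc])
    return weights @ vecs / max(weights.sum(), 1e-12)

v = doc_vector(docs[0])
print(v.shape)
```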
no code implementations • 19 May 2017 • Jose Blanchet, Yang Kang, Fan Zhang, Karthyek Murthy
Recently, (Blanchet, Kang, and Murthy 2016; Blanchet and Kang 2017) showed that several machine learning algorithms, such as square-root Lasso, Support Vector Machines, and regularized logistic regression, among many others, can be represented exactly as distributionally robust optimization (DRO) problems.
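For concreteness, the square-root Lasso objective named above is the root-mean-squared residual plus an ℓ1 penalty; under the cited equivalence, minimizing it equals minimizing the worst-case expected loss over a Wasserstein ball around the empirical distribution. The sketch below just evaluates and crudely minimizes that objective on synthetic data (the subgradient solver and all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 100, 4
X = rng.normal(size=(n, d))
beta_true = np.array([2.0, 0.0, 0.0, -1.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

def sqrt_lasso_obj(beta, lam=0.1):
    """Square-root Lasso: RMSE plus an l1 penalty.  Per the cited
    results, this equals a worst-case expectation over a Wasserstein
    ball of distributions around the data."""
    return np.sqrt(np.mean((y - X @ beta) ** 2)) + lam * np.abs(beta).sum()

# Crude subgradient descent on the objective (lam fixed at 0.1).
beta = np.zeros(d)
for _ in range(2000):
    r = y - X @ beta
    rmse = np.sqrt(np.mean(r ** 2))
    grad = -(X.T @ r) / (n * rmse) + 0.1 * np.sign(beta)
    beta -= 0.01 * grad

print(beta.round(2), round(sqrt_lasso_obj(beta), 3))
```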
no code implementations • 19 May 2017 • Jose Blanchet, Yang Kang, Fan Zhang, Fei He, Zhangyi Hu
Data-driven Distributionally Robust Optimization (DD-DRO) via optimal transport has been shown to encompass a wide range of popular machine learning algorithms.
no code implementations • 28 Feb 2017 • Jose Blanchet, Yang Kang
We propose a novel method for semi-supervised learning (SSL) based on data-driven distributionally robust optimization (DRO) using optimal transport metrics.
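The optimal transport metric underlying this DRO formulation is easy to illustrate in one dimension, where the optimal coupling between two equal-size empirical samples simply matches them in sorted order. The toy example below (not the paper's estimator) shows the Wasserstein-1 distance distinguishing unlabeled data near the labeled sample from unlabeled data far from it:

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical samples:
    the optimal transport plan pairs points in sorted order."""
    a, b = np.sort(a), np.sort(b)
    return np.abs(a - b).mean()

rng = np.random.default_rng(4)
labeled = rng.normal(loc=0.0, size=50)
unlabeled_near = rng.normal(loc=0.2, size=50)
unlabeled_far = rng.normal(loc=3.0, size=50)

d_near = wasserstein_1d(labeled, unlabeled_near)
d_far = wasserstein_1d(labeled, unlabeled_far)
print(d_near, d_far)
```

In the SSL setting, small transport cost between labeled and unlabeled samples is what lets the unlabeled data tighten the distributionally robust estimate.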