no code implementations • NeurIPS 2021 • Soumyadip Ghosh, Mark Squillante, Ebisa Wollega
Distributionally robust learning (DRL) is increasingly seen as a viable method to train machine learning models for improved model generalization.
no code implementations • 22 Dec 2020 • Soumyadip Ghosh, Mark Squillante
Seeking to improve model generalization, we consider a new approach based on distributionally robust learning (DRL) that applies stochastic gradient descent to the outer minimization problem.
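A minimal sketch of this outer-SGD idea, assuming a logistic loss and an exponential-tilting heuristic for the inner (distributional) maximization; the helper name `drl_sgd`, the tilting temperature `temp`, and the toy data are illustrative and not taken from the paper:

```python
import numpy as np

def drl_sgd(X, y, eta=0.1, temp=5.0, epochs=50, rng=None):
    """Sketch of DRL via SGD on the outer minimization of
    min_theta  max_{P in ambiguity set}  E_P[ loss(theta; x, y) ].
    The inner max over sample weights is approximated by exponential
    tilting (a KL-ball heuristic); the paper's actual inner solver may differ."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        # Per-sample logistic losses and gradients at the current iterate.
        z = X @ theta
        p = 1.0 / (1.0 + np.exp(-z))
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        grads = (p - y)[:, None] * X                    # shape (n, d)
        # Approximate inner max: up-weight high-loss samples (exponential tilt),
        # a stand-in for solving the distributionally robust inner problem.
        w = np.exp(temp * (losses - losses.max()))
        w /= w.sum()
        # Outer SGD step on the adversarially re-weighted empirical loss.
        theta -= eta * (w[:, None] * grads).sum(axis=0)
    return theta

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=200) > 0).astype(float)
theta_hat = drl_sgd(X, y)
```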
no code implementations • NeurIPS 2020 • Nian Si, Jose Blanchet, Soumyadip Ghosh, Mark Squillante
We consider the problem of estimating the Wasserstein distance between the empirical measure and a set of probability measures whose expectations over a class of functions (hypothesis class) are constrained.
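One way to write the quantity being estimated (the notation is ours, and the expectation constraints may take a slightly different form in the paper): the distance from the empirical measure to the constrained set is the smallest transport cost to any measure satisfying the constraints over the hypothesis class.

```latex
% \hat\mu_n: empirical measure, W_c: optimal-transport (Wasserstein) cost with
% ground cost c, \mathcal{F}: hypothesis class. Notation illustrative; the
% constraints may be equalities or inequalities depending on the setting.
\[
  \mathcal{W}\bigl(\hat\mu_n, \mathcal{P}\bigr)
    \;=\; \inf_{\nu \in \mathcal{P}} W_c\bigl(\hat\mu_n, \nu\bigr),
  \qquad
  \mathcal{P} \;=\; \bigl\{\, \nu \,:\, \mathbb{E}_{\nu}[f(\xi)] \le 0
      \ \ \text{for all } f \in \mathcal{F} \,\bigr\}.
\]
```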
no code implementations • NeurIPS 2019 • Yingdong Lu, Mark Squillante, Chai Wah Wu
We consider a new family of stochastic operators for reinforcement learning with the goal of alleviating the negative effects of, and becoming more robust to, approximation or estimation errors.
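A tabular sketch of one operator in this spirit, in which the standard Bellman backup is softened by subtracting a randomly scaled action-gap term; the coefficient scheme `beta`, the function name, and the toy MDP are illustrative and do not reproduce the exact operator family or the conditions from the paper:

```python
import numpy as np

def robust_stochastic_backup(Q, P, R, gamma, rng, beta_high=0.9):
    """One application of an illustrative robust stochastic operator on a
    tabular Q-table: the usual Bellman backup minus a randomly scaled,
    nonnegative action-gap term (a sketch only, not the paper's definition).

    Q: (S, A) action values, P: (S, A, S) transition kernel, R: (S, A) rewards.
    """
    S, A = Q.shape
    V = Q.max(axis=1)                                   # greedy state values
    exp_next = (P.reshape(S * A, S) @ V).reshape(S, A)  # E[max_b Q(x', b)]
    bellman = R + gamma * exp_next                      # standard backup
    gap = V[:, None] - Q                                # nonnegative action gap
    beta = rng.uniform(0.0, beta_high, size=(S, A))     # random coefficients (illustrative)
    return bellman - beta * gap

# Toy usage: iterate the operator on a random 5-state, 3-action MDP.
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.dirichlet(np.ones(S), size=(S, A))              # rows sum to one over next states
R = rng.normal(size=(S, A))
Q = np.zeros((S, A))
for _ in range(200):
    Q = robust_stochastic_backup(Q, P, R, gamma=0.9, rng=rng)
```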
no code implementations • 22 May 2018 • Soumyadip Ghosh, Mark Squillante, Ebisa Wollega
Distributionally robust optimization (DRO) is increasingly seen as a viable method to train machine learning models for improved model generalization.