Search Results for author: Mark Squillante

Found 5 papers, 0 papers with code

Efficient Generalization with Distributionally Robust Learning

no code implementations • NeurIPS 2021 • Soumyadip Ghosh, Mark Squillante, Ebisa Wollega

Distributionally robust learning (DRL) is increasingly seen as a viable method to train machine learning models for improved model generalization.

Unbiased Gradient Estimation for Distributionally Robust Learning

no code implementations • 22 Dec 2020 • Soumyadip Ghosh, Mark Squillante

Seeking to improve model generalization, we consider a new approach based on distributionally robust learning (DRL) that applies stochastic gradient descent to the outer minimization problem.
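For intuition, here is a minimal sketch of the general idea, not the paper's exact estimator: under a KL-divergence ambiguity set, the inner maximization has a dual form that reweights minibatch samples by their exponentiated loss, and stochastic gradient descent is then applied to the outer minimization. The toy regression data, the temperature `lam`, and all step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data (illustrative, not from the paper).
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

def losses(w, Xb, yb):
    # Per-sample squared losses on a minibatch.
    return (Xb @ w - yb) ** 2

def dro_grad(w, Xb, yb, lam=2.0):
    # Inner maximization (KL ambiguity set, dual form): reweight
    # samples by exp(loss / lam), so high-loss samples count more.
    l = losses(w, Xb, yb)
    q = np.exp((l - l.max()) / lam)
    q /= q.sum()
    # Outer-minimization gradient: weighted average of per-sample grads.
    per_grad = 2 * (Xb @ w - yb)[:, None] * Xb
    return q @ per_grad

# SGD on the outer minimization, one minibatch per step.
w = np.zeros(3)
for _ in range(2000):
    idx = rng.choice(200, size=32, replace=False)
    w -= 0.02 * dro_grad(w, X[idx], y[idx])
```

After training, `w` lands close to `w_true`; with a larger `lam` the weights flatten toward uniform and the procedure approaches ordinary empirical risk minimization.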

Quantifying the Empirical Wasserstein Distance to a Set of Measures: Beating the Curse of Dimensionality

no code implementations • NeurIPS 2020 • Nian Si, Jose Blanchet, Soumyadip Ghosh, Mark Squillante

We consider the problem of estimating the Wasserstein distance between the empirical measure and a set of probability measures whose expectations over a class of functions (hypothesis class) are constrained.
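As background only, a minimal sketch of the plain empirical 1-Wasserstein distance in one dimension, where it reduces to comparing sorted samples; this is the quantity whose higher-dimensional estimation suffers from the curse of dimensionality, not the paper's constrained-set estimator. The toy Gaussian samples are assumptions for illustration.

```python
import numpy as np

def w1_empirical(x, y):
    # 1-D 1-Wasserstein distance between two equal-size empirical
    # measures: mean absolute difference of the sorted samples.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=10_000)
b = rng.normal(0.5, 1.0, size=10_000)
# W1 between N(0,1) and N(0.5,1) is exactly 0.5; the empirical
# estimate from 10,000 samples lands close to that value.
```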

A Family of Robust Stochastic Operators for Reinforcement Learning

no code implementations • NeurIPS 2019 • Yingdong Lu, Mark Squillante, Chai Wah Wu

We consider a new family of stochastic operators for reinforcement learning, with the goal of alleviating the negative effects of approximation or estimation errors and improving robustness to them.
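As a generic illustration of the idea of replacing the hard Bellman max with a softer operator (not the paper's exact operator family), here is value iteration on a toy MDP using a temperature-smoothed log-sum-exp backup; the MDP, discount factor, and temperature are all invented for the example.

```python
import numpy as np

def soft_backup(q, tau=0.1):
    # A "soft" alternative to the hard max over action values:
    # numerically stable log-sum-exp with temperature tau.
    # As tau -> 0 this recovers max(q).
    m = np.max(q)
    return m + tau * np.log(np.sum(np.exp((q - m) / tau)))

# Toy 2-state, 2-action MDP (illustrative assumptions):
# P[s, a, s'] transition probabilities, R[s, a] rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

# Value iteration with the soft backup in place of the hard max.
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * P @ V                 # Q[s, a]
    V = np.array([soft_backup(Q[s]) for s in range(2)])
```

Because log-sum-exp upper-bounds the max, the resulting fixed point dominates the standard value-iteration fixed point, with a gap controlled by the temperature.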


Efficient Stochastic Gradient Descent for Learning with Distributionally Robust Optimization

no code implementations • 22 May 2018 • Soumyadip Ghosh, Mark Squillante, Ebisa Wollega

Distributionally robust optimization (DRO) problems are increasingly seen as a viable method to train machine learning models for improved model generalization.
