Search Results for author: Matthew Staib

Found 6 papers, 3 papers with code

Distributionally Robust Optimization and Generalization in Kernel Methods

1 code implementation • NeurIPS 2019 • Matthew Staib, Stefanie Jegelka

We show that MMD (maximum mean discrepancy) DRO is roughly equivalent to regularization by the Hilbert norm and, as a byproduct, reveal deep connections to classic results in statistical learning.
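As a hedged illustration of the claimed equivalence (the notation below is assumed for exposition, not taken from the listing): for a loss ℓ lying in the RKHS H, with empirical distribution P̂_n and an MMD ambiguity ball of radius ε, the worst-case risk roughly reduces to empirical risk plus a Hilbert-norm penalty.

```latex
% Schematic form of the equivalence (assumed notation):
% \ell is the loss in the RKHS \mathcal{H}, \hat{P}_n the empirical
% distribution, \epsilon the radius of the MMD ambiguity ball.
\sup_{Q \,:\, \mathrm{MMD}(Q,\, \hat{P}_n) \,\le\, \epsilon}
    \mathbb{E}_{Q}\!\left[\ell(Z)\right]
\;\approx\;
\mathbb{E}_{\hat{P}_n}\!\left[\ell(Z)\right]
\;+\; \epsilon \,\lVert \ell \rVert_{\mathcal{H}}
```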

Escaping Saddle Points with Adaptive Gradient Methods

no code implementations • 26 Jan 2019 • Matthew Staib, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar, Suvrit Sra

Adaptive methods such as Adam and RMSProp are widely used in deep learning but are not well understood.
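For context, here is a minimal sketch of the textbook Adam update that such analyses study (the standard formulation, not code from the paper):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (textbook form): bias-corrected first and second
    moment estimates rescale the gradient coordinate-wise. t is the
    1-indexed step count."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered)
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```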

Distributionally Robust Submodular Maximization

no code implementations • 14 Feb 2018 • Matthew Staib, Bryan Wilder, Stefanie Jegelka

We also show compelling empirical evidence that DRO improves generalization to the unknown stochastic submodular function.
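As a toy illustration of the robust flavor only (this finite minimax greedy is an assumption for exposition; the paper works with a distributionally robust, continuous formulation rather than this discrete worst case):

```python
def robust_greedy(ground_set, sampled_fns, k):
    """Toy robust greedy: pick k elements maximizing the worst case
    over a finite set of sampled monotone submodular functions.
    Illustrative only -- not the paper's DRO algorithm."""
    S = set()
    for _ in range(k):
        best_e, best_val = None, float("-inf")
        for e in ground_set - S:
            val = min(f(S | {e}) for f in sampled_fns)  # worst-case value
            if val > best_val:
                best_e, best_val = e, val
        S.add(best_e)
    return S
```

Each `f` in `sampled_fns` is assumed to be a set function, e.g. a coverage function drawn from the unknown stochastic model.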

Parallel Streaming Wasserstein Barycenters

1 code implementation • NeurIPS 2017 • Matthew Staib, Sebastian Claici, Justin Solomon, Stefanie Jegelka

Our method is even robust to nonstationary input distributions and produces a barycenter estimate that tracks the input measures over time.

Task: Bayesian Inference
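As a sanity-check sketch of the object being computed (the closed-form 1-D special case only; the paper's contribution is the parallel, streaming, general setting): in one dimension, the Wasserstein-2 barycenter's quantile function is the average of the input quantile functions.

```python
import numpy as np

def w2_barycenter_1d(samples_list, n_quantiles=100):
    """1-D Wasserstein-2 barycenter of empirical measures via quantile
    averaging (classical closed form; a toy check, not the paper's
    parallel streaming method). Returns the barycenter's quantile
    function evaluated on a uniform grid."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    quantiles = np.stack([np.quantile(np.asarray(s), qs)
                          for s in samples_list])
    return quantiles.mean(axis=0)
```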

Robust Budget Allocation via Continuous Submodular Functions

no code implementations • ICML 2017 • Matthew Staib, Stefanie Jegelka

The optimal allocation of resources for maximizing influence, information spread, or coverage has gained attention in recent years, particularly in machine learning and data mining.
