Search Results for author: Dmitry Storcheus

Found 6 papers, 1 paper with code

Boosting with Multiple Sources

no code implementations · NeurIPS 2021 · Corinna Cortes, Mehryar Mohri, Dmitry Storcheus, Ananda Theertha Suresh

We study the problem of learning accurate ensemble predictors, in particular boosting, in the presence of multiple source domains.

Federated Learning

Agnostic Learning with Multiple Objectives

no code implementations · NeurIPS 2020 · Corinna Cortes, Mehryar Mohri, Javier Gonzalvo, Dmitry Storcheus

We further implement the algorithm in a popular symbolic gradient computation framework and empirically demonstrate on a number of datasets the benefits of the ALMO framework over learning with a fixed mixture-weight distribution.
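To make the objective concrete, below is a minimal sketch of the agnostic multi-objective idea: train against the worst-case convex combination of per-objective losses rather than a fixed mixture. This is an illustrative toy (linear model, squared losses, alternating descent/ascent updates, assumed step sizes), not the paper's ALMO implementation.

    import numpy as np

    # Toy sketch: minimize the worst-case mixture of K per-objective losses.
    rng = np.random.default_rng(0)
    K, n, d = 3, 50, 5
    Xs = [rng.normal(size=(n, d)) for _ in range(K)]
    ys = [X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) for X in Xs]

    w = np.zeros(d)
    lam = np.ones(K) / K          # mixture weights over the K objectives
    eta_w, eta_lam = 0.01, 0.05   # assumed step sizes

    for _ in range(500):
        losses = np.array([np.mean((X @ w - y) ** 2) for X, y in zip(Xs, ys)])
        grads = [2 * X.T @ (X @ w - y) / n for X, y in zip(Xs, ys)]
        w -= eta_w * sum(l * g for l, g in zip(lam, grads))   # descent on w
        lam *= np.exp(eta_lam * losses)   # exponentiated ascent on the mixture
        lam /= lam.sum()                  # stay on the simplex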

Regularized Gradient Boosting

no code implementations · NeurIPS 2019 · Corinna Cortes, Mehryar Mohri, Dmitry Storcheus

We fill this gap by deriving data-dependent learning guarantees for gradient boosting used with regularization, expressed in terms of the Rademacher complexities of the constrained families of base predictors.

Generalization Bounds
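For context, data-dependent guarantees of this kind typically take the standard Rademacher-complexity form (this is the generic bound for a loss taking values in $[0, 1]$, not the paper's specific ensemble bound): with probability at least $1 - \delta$ over a sample $S$ of size $m$, for all hypotheses $h \in H$,

$R(h) \leq \widehat{R}_S(h) + 2\,\widehat{\mathfrak{R}}_S(H) + 3\sqrt{\frac{\log(2/\delta)}{2m}},$

where $\widehat{\mathfrak{R}}_S(H)$ is the empirical Rademacher complexity of $H$ on $S$.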

Efficient Gradient Computation for Structured Output Learning with Rational and Tropical Losses

no code implementations · NeurIPS 2018 · Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang

In this paper, we design efficient gradient computation algorithms for two broad families of structured prediction loss functions: rational and tropical losses.

Structured Prediction
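As a concrete instance, edit distance is a canonical tropical loss: it is a shortest-distance computation over the tropical semiring, where path values combine by min and edge costs by +. A minimal dynamic program illustrating this (the paper works with weighted automata rather than this explicit table):

    def edit_distance(a: str, b: str) -> int:
        # Tropical semiring view: "add" paths with min, "multiply" costs with +.
        m, n = len(a), len(b)
        D = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            D[i][0] = i
        for j in range(n + 1):
            D[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0 if a[i - 1] == b[j - 1] else 1
                D[i][j] = min(D[i - 1][j] + 1,        # deletion
                              D[i][j - 1] + 1,        # insertion
                              D[i - 1][j - 1] + sub)  # substitution or match
        return D[m][n]

    print(edit_distance("kitten", "sitting"))  # -> 3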

Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling

1 code implementation · 26 Jun 2018 · Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi, Felix X. Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, Sanjiv Kumar

Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1–3x) compared to the previous state-of-the-art methods.

Extreme Multi-Label Classification · Multi-Label Learning · +1
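A minimal sketch of the gradient-unrolling idea: make the measurement matrix a trainable parameter, decode with a fixed number of unrolled iterations, and backpropagate the reconstruction error through those iterations. The ISTA-style soft-thresholding decoder, dimensions, and hyperparameters below are assumptions standing in for the paper's decoder:

    import torch

    d, m, T = 100, 20, 10                       # signal dim, measurements, unrolled steps
    A = torch.randn(m, d, requires_grad=True)   # trainable measurement matrix
    opt = torch.optim.Adam([A], lr=1e-3)

    def decode(y, A, step=0.05, thresh=0.02):
        x = torch.zeros(y.shape[0], A.shape[1])
        for _ in range(T):                        # unrolled iterations
            x = x + step * (y - x @ A.T) @ A      # gradient step on ||Ax - y||^2
            x = torch.sign(x) * torch.clamp(x.abs() - thresh, min=0)  # soft-threshold
        return x

    for _ in range(200):
        x = torch.zeros(32, d)                    # synthetic 5-sparse signals
        idx = torch.randint(0, d, (32, 5))
        x.scatter_(1, idx, torch.randn(32, 5))
        y = x @ A.T                               # compressed measurements
        loss = ((decode(y, A) - x) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()  # backprop through the unrolling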

Foundations of Coupled Nonlinear Dimensionality Reduction

no code implementations · 29 Sep 2015 · Mehryar Mohri, Afshin Rostamizadeh, Dmitry Storcheus

The generalization error bound is based on a careful analysis of the empirical Rademacher complexity of the relevant hypothesis set.

Generalization Bounds · Supervised dimensionality reduction
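For reference, the empirical Rademacher complexity appearing in such analyses is defined, for a hypothesis set $H$ and a fixed sample $S = (x_1, \ldots, x_m)$, as

$\widehat{\mathfrak{R}}_S(H) = \mathbb{E}_{\sigma}\left[\sup_{h \in H} \frac{1}{m} \sum_{i=1}^{m} \sigma_i\, h(x_i)\right],$

where the $\sigma_i$ are independent uniform $\pm 1$ (Rademacher) random variables.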
