no code implementations • NeurIPS 2021 • Corinna Cortes, Mehryar Mohri, Dmitry Storcheus, Ananda Theertha Suresh
We study the problem of learning accurate ensemble predictors, in particular boosting, in the presence of multiple source domains.
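To make "ensemble predictor" concrete (a generic boosting-style form, not the paper's multi-source algorithm; the stumps and weights below are purely hypothetical):

```python
import numpy as np

def ensemble_predict(base_predictors, alphas, x):
    """Boosting-style ensemble: sign of a weighted sum of base predictors.

    base_predictors: list of callables mapping x -> {-1, +1}
    alphas: nonnegative ensemble weights, one per base predictor
    """
    score = sum(a * h(x) for a, h in zip(alphas, base_predictors))
    return np.sign(score)

# Hypothetical usage: two decision stumps combined with weights 0.7 and 0.3.
stumps = [lambda x: 1.0 if x[0] > 0 else -1.0,
          lambda x: 1.0 if x[1] > 0.5 else -1.0]
print(ensemble_predict(stumps, [0.7, 0.3], np.array([0.2, 0.9])))
```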
no code implementations • NeurIPS 2020 • Corinna Cortes, Mehryar Mohri, Javier Gonzalvo, Dmitry Storcheus
We further implement the algorithm in a popular symbolic gradient computation framework and empirically demonstrate, on a number of datasets, the benefits of the ALMO framework over learning with a fixed mixture-weight distribution.
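As a rough illustration of learning with a mixture of objectives rather than a fixed one (a minimal sketch, not the paper's algorithm; the toy losses, step sizes, and projection step are all assumptions), one can alternate a descent step on the model parameter with an ascent step on simplex-constrained mixture weights:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

# Two toy quadratic objectives over a scalar parameter w (assumed example).
losses = [lambda w: (w - 1.0) ** 2, lambda w: (w + 2.0) ** 2]
grads  = [lambda w: 2 * (w - 1.0), lambda w: 2 * (w + 2.0)]

w = 0.0
q = np.array([0.5, 0.5])          # mixture weights on the simplex
eta_w, eta_q = 0.05, 0.05
for _ in range(500):
    # Descent on the model parameter for the current mixture of losses.
    w -= eta_w * sum(qi * g(w) for qi, g in zip(q, grads))
    # Ascent on the mixture weights: put more mass on the larger loss.
    q = project_simplex(q + eta_q * np.array([l(w) for l in losses]))
print(w, q)  # w settles between the two minimizers, driven by the worst case
```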
no code implementations • NeurIPS 2019 • Corinna Cortes, Mehryar Mohri, Dmitry Storcheus
We fill this gap by deriving data-dependent learning guarantees for gradient boosting used with regularization, expressed in terms of the Rademacher complexities of the constrained families of base predictors.
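For context, a standard bound of this flavor (the generic textbook statement for losses in $[0, 1]$, not the paper's data-dependent guarantee): with probability at least $1 - \delta$ over an i.i.d. sample $S$ of size $m$, every $h$ in the hypothesis set $H$ satisfies

\[
R(h) \;\le\; \widehat{R}_S(h) \;+\; 2\,\widehat{\mathfrak{R}}_S(H) \;+\; 3\sqrt{\frac{\log(2/\delta)}{2m}}.
\]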
no code implementations • NeurIPS 2018 • Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang
In this paper, we design efficient gradient computation algorithms for two broad families of structured prediction loss functions: rational and tropical losses.
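As a hint of what "tropical" means here (a generic illustration of the underlying semiring, not the paper's gradient algorithms; the graph below is an assumed toy example): tropical losses are built on the (min, +) semiring, where matrix "multiplication" computes cheapest paths.

```python
import numpy as np

def tropical_matmul(A, B):
    """Matrix product over the tropical (min, +) semiring:
    (A ⊗ B)[i, j] = min_k (A[i, k] + B[k, j]).
    Ordinary (+, *) is replaced by (min, +), the semiring underlying
    tropical losses such as edit distance."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), np.inf)
    for k in range(A.shape[1]):
        C = np.minimum(C, A[:, k:k + 1] + B[k:k + 1, :])
    return C

# Edge-cost matrix of a tiny 3-node graph (np.inf = no edge).
W = np.array([[0.0, 1.0, np.inf],
              [np.inf, 0.0, 2.0],
              [np.inf, np.inf, 0.0]])
# Squaring W tropically yields cheapest path costs of at most two hops.
print(tropical_matmul(W, W))  # entry [0, 2] becomes 1 + 2 = 3
```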
1 code implementation • 26 Jun 2018 • Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi, Felix X. Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, Sanjiv Kumar
Our experiments show that there is indeed additional structure beyond sparsity in real datasets; our method discovers and exploits it to produce excellent reconstructions with fewer measurements (by a factor of 1.1-3x) than previous state-of-the-art methods.
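A much-simplified way to see why a learned measurement matrix can beat a random one (a linear/least-squares sketch that stands in for the paper's actual method; the dimensions, synthetic data, and decoder are all assumptions): fit the measurements to the data's principal subspace and decode by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, num = 100, 10, 500          # ambient dim, measurements, samples

# Synthetic data with low-rank structure beyond sparsity (toy setup).
basis = rng.standard_normal((n, 5))
X = rng.standard_normal((num, 5)) @ basis.T

def recon_error(A, X):
    """Measure with A, decode by least squares, report relative error."""
    Y = X @ A.T                                   # measurements y = A x
    X_hat = Y @ np.linalg.pinv(A).T               # least-squares decoding
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)

A_random = rng.standard_normal((m, n))            # random Gaussian sensing
# "Learned" matrix: top-m principal directions of the data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
A_learned = Vt[:m]

print("random :", recon_error(A_random, X))
print("learned:", recon_error(A_learned, X))      # near zero: rank 5 <= m
```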
no code implementations • 29 Sep 2015 • Mehryar Mohri, Afshin Rostamizadeh, Dmitry Storcheus
The generalization error bound is based on a careful analysis of the empirical Rademacher complexity of the relevant hypothesis set.
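For reference, the quantity driving such bounds (standard definition): the empirical Rademacher complexity of a hypothesis set $H$ on a sample $S = (x_1, \ldots, x_m)$, where the $\sigma_i$ are independent uniform $\pm 1$ variables, is

\[
\widehat{\mathfrak{R}}_S(H) \;=\; \mathbb{E}_{\boldsymbol{\sigma}}\Big[\, \sup_{h \in H} \frac{1}{m} \sum_{i=1}^{m} \sigma_i\, h(x_i) \Big].
\]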