1 code implementation • 30 Apr 2019 • Charles Weill, Javier Gonzalvo, Vitaly Kuznetsov, Scott Yang, Scott Yak, Hanna Mazzawi, Eugen Hotaj, Ghassen Jerfel, Vladimir Macko, Ben Adlam, Mehryar Mohri, Corinna Cortes
AdaNet is a lightweight TensorFlow-based (Abadi et al., 2015) framework for automatically learning high-quality ensembles with minimal expert intervention.
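AdaNet's core idea of growing an ensemble adaptively can be sketched in a few lines. The toy below is not the library's API; the candidate set, data, and loss are made up purely to illustrate the greedy grow-the-ensemble loop.

```python
# Toy illustration of AdaNet's core idea (not the library's API):
# grow an ensemble iteratively, keeping the candidate subnetwork
# that most improves the ensemble objective.

def ensemble_loss(ensemble, xs, ys):
    """Mean squared error of the uniformly averaged ensemble."""
    loss = 0.0
    for x, y in zip(xs, ys):
        pred = sum(f(x) for f in ensemble) / len(ensemble)
        loss += (pred - y) ** 2
    return loss / len(xs)

def grow_ensemble(candidates, xs, ys, max_rounds=5):
    """Greedily add the candidate that most reduces ensemble loss."""
    ensemble, best_loss = [], float("inf")
    for _ in range(max_rounds):
        round_best = None
        for f in candidates:
            trial = ensemble + [f]
            loss = ensemble_loss(trial, xs, ys)
            if loss < best_loss:
                best_loss, round_best = loss, trial
        if round_best is None:  # no candidate improves the ensemble: stop
            break
        ensemble = round_best
    return ensemble, best_loss

# Target is y = 2x; candidates are fixed-slope linear functions.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x for x in xs]
candidates = [lambda x, a=a: a * x for a in (1.0, 2.0, 3.0)]
ensemble, loss = grow_ensemble(candidates, xs, ys)
```

Here the slope-2 candidate alone drives the loss to zero, so the loop stops after one round; the real framework learns candidate subnetworks and weighs them by a complexity-regularized objective rather than a uniform average.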
no code implementations • NeurIPS 2018 • Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang
In this paper, we design efficient gradient computation algorithms for two broad families of structured prediction loss functions: rational and tropical losses.
no code implementations • 3 Jan 2018 • Scott Yang, Silvia Lopez, Meysam Golmohammadi, Iyad Obeid, Joseph Picone
In this study, we investigated the effectiveness of using an active learning algorithm to automatically annotate a large EEG corpus.
no code implementations • 3 Jan 2018 • Silvia Lopez, Aaron Gross, Scott Yang, Meysam Golmohammadi, Iyad Obeid, Joseph Picone
In this study, we explore the impact this variability has on machine learning performance.
no code implementations • NeurIPS 2017 • Mehryar Mohri, Scott Yang
A by-product of our study is a swap-regret algorithm that, under mild assumptions, is more efficient than existing ones, together with a substantially more efficient algorithm for time-selection swap regret.
no code implementations • 29 Oct 2017 • Corinna Cortes, Giulia Desalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang
We show that the notion of discrepancy can be used to design general algorithms and a unified analysis framework for rested multi-armed bandit problems with non-stationary rewards.
no code implementations • 29 Apr 2017 • Mehryar Mohri, Scott Yang
We consider a general framework of online learning with expert advice where regret is defined with respect to sequences of experts accepted by a weighted automaton.
no code implementations • ICML 2018 • Corinna Cortes, Giulia Desalvo, Claudio Gentile, Mehryar Mohri, Scott Yang
In the stochastic setting, we first point out a bias problem that limits the straightforward extension of algorithms such as UCB-N to time-varying feedback graphs, as needed in this context.
no code implementations • NeurIPS 2016 • Scott Yang, Mehryar Mohri
We introduce the general and powerful scheme of predicting information re-use in optimization algorithms.
2 code implementations • ICML 2017 • Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang
We present new algorithms for adaptively learning artificial neural networks.
no code implementations • NeurIPS 2016 • Corinna Cortes, Mehryar Mohri, Vitaly Kuznetsov, Scott Yang
We give new data-dependent margin guarantees for structured prediction for a very wide family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition.
no code implementations • 18 Sep 2015 • Mehryar Mohri, Scott Yang
We present a powerful general framework for designing data-dependent optimization algorithms, building upon and unifying recent techniques in adaptive regularization, optimistic gradient predictions, and problem-dependent randomization.
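One standard instance of the optimistic-gradient component (in our own notation; not necessarily this paper's exact algorithm) is optimistic follow-the-regularized-leader, which plays

```latex
% Optimistic FTRL with regularizer R_t and gradient prediction \tilde g_{t+1}:
x_{t+1} \;=\; \operatorname*{argmin}_{x \in \mathcal{X}}
  \Big\langle \textstyle\sum_{s=1}^{t} g_s + \tilde g_{t+1},\, x \Big\rangle + R_t(x).
```

When the predictions $\tilde g_t$ are accurate, the regret bound scales with $\sum_t \|g_t - \tilde g_t\|_*^2$ rather than $\sum_t \|g_t\|_*^2$, which is the sense in which predicting gradients makes the algorithm data-dependent.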
no code implementations • NeurIPS 2014 • Mehryar Mohri, Scott Yang
We introduce a natural extension of the notion of swap regret, conditional swap regret, that allows for action modifications conditioned on the player’s action history.
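For context, the standard swap regret this work extends, and its one-step conditional variant, can be sketched as follows (notation is ours; the paper's precise definition conditions on bounded action histories and may differ in details):

```latex
% Standard swap regret over action set A with losses \ell_t:
R^{\mathrm{swap}}_T \;=\; \max_{\varphi : A \to A}
  \sum_{t=1}^{T} \big( \ell_t(a_t) - \ell_t(\varphi(a_t)) \big).

% Conditional swap regret lets the swap function also depend on
% recent history, e.g. the previous action:
R^{\mathrm{cond}}_T \;=\; \max_{\varphi : A \times A \to A}
  \sum_{t=1}^{T} \big( \ell_t(a_t) - \ell_t(\varphi(a_t, a_{t-1})) \big).
```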