Search Results for author: Scott Yang

Found 13 papers, 2 papers with code

AdaNet: A Scalable and Flexible Framework for Automatically Learning Ensembles

1 code implementation • 30 Apr 2019 • Charles Weill, Javier Gonzalvo, Vitaly Kuznetsov, Scott Yang, Scott Yak, Hanna Mazzawi, Eugen Hotaj, Ghassen Jerfel, Vladimir Macko, Ben Adlam, Mehryar Mohri, Corinna Cortes

AdaNet is a lightweight framework, built on TensorFlow (Abadi et al., 2015), for automatically learning high-quality ensembles with minimal expert intervention.
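The adaptive-ensemble idea can be illustrated without TensorFlow. The sketch below is a hypothetical, minimal stand-in for an AdaNet-style search loop, not the library's API: each round it evaluates candidate weak learners (decision stumps on toy 1-D data) and keeps the one that best trades off ensemble error against ensemble size.

```python
import random

# Toy data: label 1 iff the feature exceeds 0.5.
random.seed(0)
X = [random.random() for _ in range(200)]
y = [1 if x > 0.5 else 0 for x in X]

def stump(threshold):
    """A candidate weak learner: predict 1 above the threshold."""
    return lambda x: 1 if x > threshold else 0

def ensemble_error(members, X, y):
    """Misclassification rate of the members' majority vote."""
    errs = 0
    for xi, yi in zip(X, y):
        vote = sum(m(xi) for m in members) / len(members)
        errs += int((1 if vote >= 0.5 else 0) != yi)
    return errs / len(X)

# Adaptive ensemble growth: each round, evaluate candidate subnetworks
# (here decision stumps) and keep the one minimizing ensemble error plus
# a complexity penalty proportional to ensemble size.
ensemble = []
for _ in range(5):
    best, best_obj = None, float("inf")
    for i in range(1, 10):
        cand = stump(i / 10)
        obj = ensemble_error(ensemble + [cand], X, y) + 0.01 * (len(ensemble) + 1)
        if obj < best_obj:
            best, best_obj = cand, obj
    ensemble.append(best)

print(ensemble_error(ensemble, X, y))
```

The complexity penalty is what makes the search "adaptive": a candidate is added only when its error reduction justifies the extra capacity.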

Neural Architecture Search

Efficient Gradient Computation for Structured Output Learning with Rational and Tropical Losses

no code implementations • NeurIPS 2018 • Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang

In this paper, we design efficient gradient computation algorithms for two broad families of structured prediction loss functions: rational and tropical losses.
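Tropical losses are defined over the (min, +) semiring, and edit distance is the canonical example: replacing sum with min and product with + turns a path-counting recurrence into a shortest-path one. The dynamic program below is a minimal illustration of such a loss, not the paper's gradient-computation algorithm.

```python
def edit_distance(a, b):
    """Edit distance via a (min, +) "tropical" dynamic program:
    ordinary sum becomes min, ordinary product becomes +."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```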

Structured Prediction

Semi-automated Annotation of Signal Events in Clinical EEG Data

no code implementations • 3 Jan 2018 • Scott Yang, Silvia Lopez, Meysam Golmohammadi, Iyad Obeid, Joseph Picone

In this study, we investigated the effectiveness of using an active learning algorithm to automatically annotate a large EEG corpus.
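Annotation with active learning can be sketched as an uncertainty-sampling loop: the model repeatedly asks the annotator to label the example it is least sure about. The toy example below uses 1-D features and a threshold model as stand-ins for EEG windows and a real classifier; it illustrates the loop, not the study's pipeline.

```python
import random

random.seed(1)

def oracle(x):
    """Stand-in for the human annotator: the true concept is x > 0.5."""
    return 1 if x > 0.5 else 0

# Unlabeled pool (a stand-in for unannotated EEG windows) and a tiny seed set.
pool = [random.random() for _ in range(500)]
labeled = [(0.0, 0), (1.0, 1)]

def fit_threshold(labeled):
    """Crude model: midpoint between the largest 0-example and smallest 1-example."""
    lo = max(x for x, y in labeled if y == 0)
    hi = min(x for x, y in labeled if y == 1)
    return (lo + hi) / 2

# Uncertainty sampling: always query the pool point the current model is
# least certain about, i.e. the one closest to its decision boundary.
for _ in range(20):
    boundary = fit_threshold(labeled)
    query = min(pool, key=lambda x: abs(x - boundary))
    pool.remove(query)
    labeled.append((query, oracle(query)))

print(fit_threshold(labeled))
```

With only 20 queries out of 500 pool points, the estimated boundary homes in on the true 0.5, which is the economy active learning buys for large corpora.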

Active Learning • BIG-bench Machine Learning • +1

Online Learning with Transductive Regret

no code implementations • NeurIPS 2017 • Mehryar Mohri, Scott Yang

A by-product of our study is an algorithm for swap regret, which, under mild assumptions, is more efficient than existing ones, and a substantially more efficient algorithm for time selection swap regret.
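Swap regret compares the learner's loss against the best per-action modification function; because the replacement can be chosen independently for each action, the empirical quantity decomposes into a sum over actions. The sketch below computes the definition only, not the paper's algorithm.

```python
def swap_regret(plays, losses):
    """Empirical swap regret: for each action the player used, compare the
    loss actually incurred on those rounds with the best fixed replacement."""
    actions = range(len(losses[0]))
    regret = 0.0
    for a in actions:
        rounds = [t for t, p in enumerate(plays) if p == a]
        if not rounds:
            continue
        incurred = sum(losses[t][a] for t in rounds)
        best_alt = min(sum(losses[t][b] for t in rounds) for b in actions)
        regret += incurred - best_alt
    return regret

# The player picks the worse action every round.
plays = [0, 0, 1, 1]
losses = [[1, 0], [1, 0], [0, 1], [0, 1]]
print(swap_regret(plays, losses))  # 4.0
```

On this sequence the swap regret is 4, while the external regret (against the best single fixed action) is only 2, illustrating that swap regret is the stronger benchmark.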

Discrepancy-Based Algorithms for Non-Stationary Rested Bandits

no code implementations • 29 Oct 2017 • Corinna Cortes, Giulia DeSalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang

We show that the notion of discrepancy can be used to design very general algorithms and a unified framework for the analysis of multi-armed rested bandit problems with non-stationary rewards.
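A common baseline for non-stationary rewards, quite apart from the discrepancy machinery of the paper, is sliding-window UCB: estimates are computed only from each arm's most recent observations so the policy can track drifting means. The toy sketch below uses a restless change-point purely as a stand-in; it is not the paper's algorithm and is not specific to the rested setting.

```python
import math
import random

def sliding_window_ucb(arms, horizon=5000, window=500):
    """UCB where only each arm's rewards inside the last `window` rounds
    inform its estimate, so the policy can track drifting means."""
    history = []          # (arm, reward) pairs, oldest first
    total = 0.0
    for t in range(horizon):
        recent = history[-window:]
        stats = {a: [r for arm, r in recent if arm == a] for a in range(len(arms))}

        def ucb(a):
            rs = stats[a]
            if not rs:
                return float("inf")      # unexplored within the window
            bonus = math.sqrt(2 * math.log(min(t + 1, window)) / len(rs))
            return sum(rs) / len(rs) + bonus

        a = max(range(len(arms)), key=ucb)
        r = arms[a](t)
        history.append((a, r))
        total += r
    return total / horizon

random.seed(0)
# Bernoulli arms whose means swap halfway through the horizon.
arms = [
    lambda t: float(random.random() < (0.8 if t < 2500 else 0.2)),
    lambda t: float(random.random() < (0.2 if t < 2500 else 0.8)),
]
avg = sliding_window_ucb(arms)
print(avg)
```

Because stale rewards age out of the window, the policy recovers after the change-point, keeping the average reward well above the 0.5 a non-adaptive choice would earn.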

Online Learning with Automata-based Expert Sequences

no code implementations • 29 Apr 2017 • Mehryar Mohri, Scott Yang

We consider a general framework of online learning with expert advice where regret is defined with respect to sequences of experts accepted by a weighted automaton.
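The classical building block behind expert-advice frameworks is exponential weights over a fixed expert set; the paper's contribution is to extend regret guarantees to expert sequences accepted by a weighted automaton, which the baseline sketch below does not attempt.

```python
import math

def exponential_weights(expert_losses, eta=0.5):
    """Randomized weighted majority: one weight per expert, multiplicatively
    penalized by exp(-eta * loss) after each round."""
    n = len(expert_losses[0])
    w = [1.0] * n
    total = 0.0
    for round_losses in expert_losses:
        z = sum(w)
        p = [wi / z for wi in w]                     # play distribution
        total += sum(pi * li for pi, li in zip(p, round_losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    return total

# Expert 1 is perfect, so weight shifts toward it and the learner's
# cumulative loss stays close to the best expert's (here, 0).
losses = [[1, 0], [1, 0], [1, 0], [1, 0]]
loss = exponential_weights(losses)
print(loss)
```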

Online Learning with Abstention

no code implementations • ICML 2018 • Corinna Cortes, Giulia DeSalvo, Claudio Gentile, Mehryar Mohri, Scott Yang

In the stochastic setting, we first point out a bias problem that limits the straightforward extension of algorithms such as UCB-N to time-varying feedback graphs, as needed in this context.

Optimistic Bandit Convex Optimization

no code implementations • NeurIPS 2016 • Scott Yang, Mehryar Mohri

We introduce the general and powerful scheme of predicting information re-use in optimization algorithms.
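Optimistic methods exploit a prediction of the next gradient; re-using the last observed gradient as that prediction is the simplest instance of information re-use. The sketch below shows optimistic gradient descent with full gradients on a 1-D quadratic; the paper's bandit setting, where only function values are observed, is not reproduced here.

```python
def optimistic_gd(grad, x0=0.0, eta=0.1, rounds=100):
    """Optimistic gradient descent: each play is taken from a point already
    shifted by a prediction of the upcoming gradient; re-using the last
    observed gradient as that prediction is the simplest form of re-use."""
    y = x0            # iterate updated with the true gradients
    m = 0.0           # gradient prediction
    x = y - eta * m   # the point actually played
    for _ in range(rounds):
        g = grad(x)
        y = y - eta * g
        m = g                 # predict: next gradient = current gradient
        x = y - eta * m
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 (x - 3).
x_star = optimistic_gd(lambda x: 2 * (x - 3))
print(x_star)
```

When the prediction is accurate, as it is on a smooth objective like this, the played point is effectively one step ahead, which is the source of the improved guarantees.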

Structured Prediction Theory Based on Factor Graph Complexity

no code implementations • NeurIPS 2016 • Corinna Cortes, Mehryar Mohri, Vitaly Kuznetsov, Scott Yang

We give new data-dependent margin guarantees for structured prediction for a very wide family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition.

Structured Prediction

Accelerating Optimization via Adaptive Prediction

no code implementations • 18 Sep 2015 • Mehryar Mohri, Scott Yang

We present a powerful general framework for designing data-dependent optimization algorithms, building upon and unifying recent techniques in adaptive regularization, optimistic gradient predictions, and problem-dependent randomization.
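Adaptive regularization in the AdaGrad style is one of the ingredients such a framework builds on: per-coordinate step sizes shrink with the accumulated squared gradients, making the geometry data-dependent. Below is a standard diagonal-AdaGrad sketch of that single ingredient, not the paper's combined algorithm.

```python
import math

def adagrad(grad, x0, eta=1.0, rounds=200):
    """Diagonal AdaGrad: each coordinate's step size is scaled by the
    inverse square root of its accumulated squared gradients."""
    x = list(x0)
    s = [1e-8] * len(x)        # accumulated squared gradients (avoids /0)
    for _ in range(rounds):
        g = grad(x)
        for i in range(len(x)):
            s[i] += g[i] ** 2
            x[i] -= eta * g[i] / math.sqrt(s[i])
    return x

# f(x) = x0^2 + 10 x1^2: the two coordinates have very different curvature,
# and the adaptive scaling equalizes their effective step sizes.
x = adagrad(lambda v: [2 * v[0], 20 * v[1]], [5.0, 5.0])
print(x)
```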

Conditional Swap Regret and Conditional Correlated Equilibrium

no code implementations • NeurIPS 2014 • Mehryar Mohri, Scott Yang

We introduce a natural extension of the notion of swap regret, conditional swap regret, that allows for action modifications conditioned on the player’s action history.
