no code implementations • NeurIPS 2015 • Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, Robert E. Schapire
We develop a new active learning algorithm for the streaming setting that satisfies three important properties: 1) It provably works for any classifier representation and classification problem, including those with severe noise.
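A minimal sketch of the importance-weighted query loop common to streaming active learners, assuming user-supplied `train` and `predict` callbacks; the uncertainty-based query probability below is an illustrative placeholder, not the paper's sampling scheme.

```python
import random

def stream_active_learn(stream, train, predict, p_min=0.1):
    """Streaming active learning sketch (illustrative, not the paper's rule).

    `stream` yields (x, label_oracle) pairs; `train` fits a model on
    importance-weighted examples; `predict` returns (label, margin).
    """
    labeled = []   # (x, y, importance weight)
    model = None
    for x, label_oracle in stream:
        if model is None:
            p = 1.0  # no model yet: query everything
        else:
            _, margin = predict(model, x)
            # placeholder rule: query more often when the model is uncertain
            p = max(p_min, min(1.0, 1.0 / (1.0 + margin)))
        if random.random() < p:
            y = label_oracle()                # pay for a label
            labeled.append((x, y, 1.0 / p))   # 1/p weights keep estimates unbiased
            model = train(labeled)
    return model
```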
no code implementations • NeurIPS 2014 • Alekh Agarwal, Alina Beygelzimer, Daniel J. Hsu, John Langford, Matus J. Telgarsky
Can we effectively learn a nonlinear representation in time comparable to linear learning?
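One concrete reading of the question is to grow nonlinear features adaptively, so that total work stays close to linear learning. A hedged sketch under that reading: `fit_linear` is an assumed linear learner returning one weight per feature, and the largest-|weight| selection rule is an assumption, not the paper's exact criterion.

```python
import numpy as np

def adaptive_polynomial_expansion(X, y, fit_linear, n_rounds=3, top_k=5):
    """Grow polynomial features only in directions the current linear
    model finds useful, then refit. Illustrative sketch."""
    feats = X.copy()
    for _ in range(n_rounds):
        w = fit_linear(feats, y)
        top = np.argsort(-np.abs(w))[:top_k]          # most important features so far
        # cross them with the base features to add higher-degree monomials
        new = np.hstack([feats[:, [j]] * X for j in top])
        feats = np.hstack([feats, new])
    return fit_linear(feats, y), feats
```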
no code implementations • NeurIPS 2013 • James Y. Zou, Daniel J. Hsu, David C. Parkes, Ryan P. Adams
In many natural settings, the analysis goal is not to characterize a single data set in isolation, but rather to understand the difference between one set of observations and another.
no code implementations • NeurIPS 2012 • Anima Anandkumar, Daniel J. Hsu, Furong Huang, Sham M. Kakade
We consider unsupervised estimation of mixtures of discrete graphical models, where the class variable is hidden and each mixture component can have a potentially different Markov graph structure and parameters over the observed variables.
no code implementations • NeurIPS 2012 • Daniel J. Hsu, Sham M. Kakade, Percy S. Liang
This paper explores unsupervised learning of parsing models along two directions.
no code implementations • NeurIPS 2012 • Anima Anandkumar, Dean P. Foster, Daniel J. Hsu, Sham M. Kakade, Yi-Kai Liu
This work provides a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of topic models, including Latent Dirichlet Allocation (LDA).
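A rough sketch of the moment-based recipe behind such guarantees: whiten the second-order moment, then eigendecompose a whitened slice of the third-order moment. `M2` (the d x d pair moment) and `M3_eta` (a d x d slice of the third moment along a direction eta) are assumed precomputed, and the Dirichlet-specific moment corrections from the paper are omitted.

```python
import numpy as np

def spectral_topic_recovery(M2, M3_eta, k):
    """Two-decomposition sketch: whiten M2, eigendecompose the whitened
    third-moment slice, un-whiten to get topic directions. Illustrative."""
    U, s, _ = np.linalg.svd(M2)
    W = U[:, :k] / np.sqrt(s[:k])          # whitener: W.T @ M2 @ W = I_k
    H = W.T @ M3_eta @ W                   # symmetric k x k
    _, V = np.linalg.eigh(H)               # eigenvectors separate the topics
    B = np.linalg.pinv(W.T)                # un-whitening map
    topics = B @ V
    topics *= np.sign(topics.sum(axis=0, keepdims=True))  # fix eigenvector signs
    return topics / topics.sum(axis=0, keepdims=True)     # columns ~ word distributions
```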
no code implementations • NeurIPS 2011 • Alekh Agarwal, Dean P. Foster, Daniel J. Hsu, Sham M. Kakade, Alexander Rakhlin
This paper addresses the problem of minimizing a convex, Lipschitz function $f$ over a convex, compact set $X$ under a stochastic bandit feedback model.
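For context on the feedback model (only noisy evaluations of $f$, no gradients), here is the classical one-point gradient estimator of Flaxman et al.; the paper's own algorithm is different and attains stronger regret guarantees.

```python
import numpy as np

def one_point_bandit_gd(f_noisy, x0, T, delta=0.1, eta=0.01, project=lambda x: x):
    """Bandit convex optimization via a one-point gradient estimate.
    `f_noisy(x)` returns a noisy function value; `project` maps back into X."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(T):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)            # uniform direction on the sphere
        y = f_noisy(x + delta * u)        # the single bandit query this round
        g = (d / delta) * y * u           # unbiased gradient of a smoothed f
        x = project(x - eta * g)          # descend and project onto X
    return x
```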
no code implementations • NeurIPS 2011 • Animashree Anandkumar, Kamalika Chaudhuri, Daniel J. Hsu, Sham M. Kakade, Le Song, Tong Zhang
The setting is one where we have samples only from certain observed variables in the tree, and our goal is to estimate the tree structure (i.e., the graph of how the underlying hidden variables are connected to each other and to the observed variables).
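A tiny sketch of the quartet (four-point) test that tree-metric approaches of this kind build on; the scalar correlation distance is a stand-in for the paper's spectral distances, and full structure recovery needs more machinery.

```python
import numpy as np

def info_distance(corr):
    """Additive tree metric from pairwise correlations: d(i,j) = -log|rho_ij|.
    A scalar stand-in for the paper's spectral distances."""
    return -np.log(np.abs(corr))

def quartet_split(d, i, j, k, l):
    """Four-point test: in a tree metric, the pairing with the smallest
    sum of within-pair distances sits on the same side of the internal
    edge (the other two sums are equal and larger)."""
    sums = {(i, j): d[i, j] + d[k, l],
            (i, k): d[i, k] + d[j, l],
            (i, l): d[i, l] + d[j, k]}
    return min(sums, key=sums.get)   # the pair grouped together
```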
no code implementations • NeurIPS 2010 • Alina Beygelzimer, Daniel J. Hsu, John Langford, Tong Zhang
We present and analyze an agnostic active learning algorithm that works without keeping a version space.
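The central device is the importance-weighted error estimate, sketched below under assumed bookkeeping: buying a label with probability p and weighting it by 1/p keeps the estimate unbiased, so hypotheses can be compared without enumerating a version space.

```python
def iw_error(queried, T, h):
    """Unbiased error estimate for hypothesis h over a stream of T examples.
    `queried` holds (x, y, p) for examples whose labels were bought with
    probability p; unqueried examples contribute zero, and the 1/p weights
    restore the right expectation."""
    return sum((h(x) != y) / p for x, y, p in queried) / T
```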
no code implementations • NeurIPS 2009 • Kamalika Chaudhuri, Yoav Freund, Daniel J. Hsu
Previous algorithms for learning in this framework have a tunable learning rate parameter, and a major barrier to using online learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large.
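A sketch of the parameter-free weight update in the spirit of this paper's NormalHedge algorithm: the scale c is solved from the current regrets each round instead of being tuned as a learning rate. The bisection bounds and overflow cap are implementation assumptions.

```python
import numpy as np

def normalhedge_weights(R):
    """Weights from cumulative regrets R via the potential
    exp(max(R,0)^2 / (2c)), with c chosen so the average potential is e."""
    Rp = np.maximum(np.asarray(R, dtype=float), 0.0)
    n = Rp.size
    if not Rp.any():
        return np.full(n, 1.0 / n)        # no positive regret: play uniformly
    avg_pot = lambda c: np.mean(np.exp(np.minimum(Rp**2 / (2 * c), 700.0)))
    lo, hi = 1e-12, Rp.max()**2 / 2 + 1.0  # avg_pot(hi) < e by construction
    for _ in range(100):                   # bisect for avg_pot(c) = e
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if avg_pot(mid) > np.e else (lo, mid)
    c = (lo + hi) / 2
    w = (Rp / c) * np.exp(Rp**2 / (2 * c))  # derivative of the potential
    return w / w.sum()
```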