Search Results for author: Daniel J. Hsu

Found 10 papers, 0 papers with code

Efficient and Parsimonious Agnostic Active Learning

no code implementations · NeurIPS 2015 · Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, Robert E. Schapire

We develop a new active learning algorithm for the streaming setting satisfying three important properties: 1) It provably works for any classifier representation and classification problem, including those with severe noise.

Active Learning · General Classification
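As a rough illustration of the streaming setting, the sketch below queries the label oracle only when the current model is uncertain about the incoming point. This is a generic margin-based learner, not the paper's algorithm; the `stream_active_learn` helper, its `threshold`, and the seed-labeling phase are all hypothetical choices.

```python
# Generic margin-based streaming active learner (illustrative only; not the
# paper's algorithm). Hypothetical helper: query a label only when the model
# is uncertain about the incoming point.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stream_active_learn(stream, threshold=0.2, seed_size=10):
    """Consume (x, oracle_label) pairs; pay for a label only when uncertain."""
    X_lab, y_lab = [], []
    model = LogisticRegression()
    for i, (x, oracle_label) in enumerate(stream):
        if i < seed_size:                      # bootstrap with a few labeled seeds
            X_lab.append(x)
            y_lab.append(oracle_label)
            continue
        if i == seed_size:
            model.fit(np.array(X_lab), np.array(y_lab))
        p = model.predict_proba(x.reshape(1, -1))[0, 1]
        if abs(p - 0.5) < threshold:           # uncertain: query the oracle
            X_lab.append(x)
            y_lab.append(oracle_label)
            model.fit(np.array(X_lab), np.array(y_lab))
    return model, len(X_lab)

# Toy stream: two Gaussian blobs, shuffled
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
order = rng.permutation(400)
model, n_queried = stream_active_learn(zip(X[order], y[order]))
print("labels requested:", n_queried, "of 400")
```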

Contrastive Learning Using Spectral Methods

no code implementations · NeurIPS 2013 · James Y. Zou, Daniel J. Hsu, David C. Parkes, Ryan P. Adams

In many natural settings, the analysis goal is not to characterize a single data set in isolation, but rather to understand the difference between one set of observations and another.

Contrastive Learning
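To convey the flavor of spectrally contrasting two datasets, here is a contrastive-PCA-style sketch that eigendecomposes the difference between foreground and background covariances. It illustrates the goal (directions that distinguish one set of observations from another), not the latent variable model the paper proposes; `alpha` is a hypothetical trade-off knob.

```python
# Contrastive-PCA-style sketch: directions that explain variance in a
# "foreground" dataset beyond what a "background" dataset explains. This is
# an illustration of the goal, not the paper's latent variable model; alpha
# is a hypothetical trade-off knob.
import numpy as np

def contrastive_directions(foreground, background, alpha=1.0, k=2):
    """Top-k eigenvectors of cov(fg) - alpha * cov(bg)."""
    C_fg = np.cov(foreground, rowvar=False)
    C_bg = np.cov(background, rowvar=False)
    evals, evecs = np.linalg.eigh(C_fg - alpha * C_bg)  # symmetric eigendecomposition
    order = np.argsort(evals)[::-1]                     # largest contrastive variance first
    return evecs[:, order[:k]]

rng = np.random.default_rng(0)
background = rng.normal(size=(500, 5))
foreground = rng.normal(size=(500, 5))
foreground[:, 0] *= 3.0  # extra structure only in the first coordinate
print(contrastive_directions(foreground, background, k=1))  # approx +/- e_0
```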

Learning Mixtures of Tree Graphical Models

no code implementations · NeurIPS 2012 · Anima Anandkumar, Daniel J. Hsu, Furong Huang, Sham M. Kakade

We consider unsupervised estimation of mixtures of discrete graphical models, where the class variable is hidden and each mixture component can have a potentially different Markov graph structure and parameters over the observed variables.
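The classical single-tree building block behind this line of work is the Chow-Liu algorithm: estimate pairwise mutual information and take a maximum-weight spanning tree. A minimal sketch of that building block follows; recovering a mixture of trees with a hidden class variable, the paper's actual contribution, is not attempted here.

```python
# Chow-Liu tree learning: the classical single-tree building block
# (illustrative; the paper handles a mixture of such trees with a hidden
# class variable, which this sketch does not attempt).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Empirical MI between two discrete variables given as integer arrays."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        joint[a, b] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def chow_liu_edges(data):
    """data: (n_samples, n_vars) integer array. Edges of the max-MI spanning tree."""
    n_vars = data.shape[1]
    W = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            W[i, j] = mutual_information(data[:, i], data[:, j])
    tree = minimum_spanning_tree(-W)   # negate weights: max spanning tree
    return list(zip(*tree.nonzero()))

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 1000)
x1 = np.where(rng.random(1000) < 0.1, 1 - x0, x0)  # noisy copy of x0
x2 = rng.integers(0, 2, 1000)                      # independent of both
print(chow_liu_edges(np.column_stack([x0, x1, x2])))  # expect edge (0, 1)
```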

A Spectral Algorithm for Latent Dirichlet Allocation

no code implementations · NeurIPS 2012 · Anima Anandkumar, Dean P. Foster, Daniel J. Hsu, Sham M. Kakade, Yi-Kai Liu

This work provides a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of topic models, including Latent Dirichlet Allocation (LDA).

Clustering · Topic Models
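Spectral and method-of-moments procedures for topic models operate on low-order moments of the observed words. The sketch below computes only the raw ingredient, an empirical word-pair co-occurrence moment from bag-of-words documents; the moment corrections and eigendecompositions the paper performs on top of this are omitted.

```python
# Raw ingredient of moment-based topic modeling: the empirical second-order
# word co-occurrence moment from bag-of-words documents. The eigendecomposition
# steps applied to (corrected) moments in the paper are omitted here.
import numpy as np

def cooccurrence_moment(docs, vocab_size):
    """Average of x1 x2^T over ordered pairs of distinct word positions per doc."""
    M2 = np.zeros((vocab_size, vocab_size))
    n_pairs = 0
    for doc in docs:  # doc: list of integer word ids
        counts = np.bincount(doc, minlength=vocab_size).astype(float)
        # outer(counts, counts) counts all ordered position pairs, including a
        # position paired with itself; subtract diag(counts) to drop those.
        M2 += np.outer(counts, counts) - np.diag(counts)
        n_pairs += len(doc) * (len(doc) - 1)
    return M2 / n_pairs

docs = [[0, 1, 1, 2], [2, 3, 3], [0, 1, 2, 3]]
print(cooccurrence_moment(docs, vocab_size=4))
```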

Stochastic convex optimization with bandit feedback

no code implementations · NeurIPS 2011 · Alekh Agarwal, Dean P. Foster, Daniel J. Hsu, Sham M. Kakade, Alexander Rakhlin

This paper addresses the problem of minimizing a convex, Lipschitz function $f$ over a convex, compact set $X$ under a stochastic bandit feedback model.
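As context for the bandit feedback model, a classical baseline is the one-point gradient estimator in the style of Flaxman et al., sketched below with hypothetical step-size and smoothing parameters. The paper's algorithm obtains stronger regret guarantees via a different construction, so treat this only as an illustration of optimizing from noisy function values alone.

```python
# One-point gradient estimator for bandit convex optimization, in the style of
# Flaxman et al. (a classical baseline, not this paper's algorithm). The
# step size eta, smoothing radius delta, and horizon T are hypothetical.
import numpy as np

def bandit_gradient_descent(f_noisy, x0, T=5000, delta=0.1, eta=0.01, radius=1.0):
    """Minimize f over the Euclidean ball using only noisy function values."""
    x = np.array(x0, dtype=float)
    d = x.size
    rng = np.random.default_rng(0)
    for _ in range(T):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                        # uniform direction on the sphere
        g = (d / delta) * f_noisy(x + delta * u) * u  # one-point gradient estimate
        x -= eta * g
        nrm = np.linalg.norm(x)
        if nrm > radius - delta:                      # shrink so queries stay feasible
            x *= (radius - delta) / nrm
    return x

# Toy objective: f(x) = ||x - c||^2 plus observation noise
c = np.array([0.3, -0.2])
f_noisy = lambda x: float(np.sum((x - c) ** 2)) + 0.01 * np.random.randn()
print(bandit_gradient_descent(f_noisy, x0=np.zeros(2)))  # should land near c
```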

Spectral Methods for Learning Multivariate Latent Tree Structure

no code implementations · NeurIPS 2011 · Animashree Anandkumar, Kamalika Chaudhuri, Daniel J. Hsu, Sham M. Kakade, Le Song, Tong Zhang

The setting is one where we only have samples from certain observed variables in the tree, and our goal is to estimate the tree structure (i.e., the graph of how the underlying hidden variables are connected to each other and to the observed variables).
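For Gaussian and related linear tree models, the information distance d(i, j) = -log |corr(x_i, x_j)| is additive along the tree, which is what makes structure recovery from pairwise statistics possible. Below is a minimal sketch of that distance matrix plus the four-point quartet test on top of it, under the simplifying assumption of scalar observed variables; the paper's full procedure for multivariate variables is omitted.

```python
# Pairwise information distances and the four-point quartet test: building
# blocks of distance-based latent tree recovery, assuming scalar observed
# variables (the paper treats the multivariate case).
import numpy as np

def information_distances(X):
    """X: (n_samples, n_vars). Returns pairwise -log|correlation| distances."""
    R = np.corrcoef(X, rowvar=False)
    return -np.log(np.abs(R) + 1e-12)

def quartet_split(D, i, j, k, l):
    """Return the pairing with the smallest distance sum: the tree-consistent split."""
    sums = {
        ((i, j), (k, l)): D[i, j] + D[k, l],
        ((i, k), (j, l)): D[i, k] + D[j, l],
        ((i, l), (j, k)): D[i, l] + D[j, k],
    }
    return min(sums, key=sums.get)

# Toy Markov chain x0 -> x1 -> x2 -> x3: correlations decay along the chain
rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + 0.6 * rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
x3 = 0.8 * x2 + 0.6 * rng.normal(size=n)
D = information_distances(np.column_stack([x0, x1, x2, x3]))
print(quartet_split(D, 0, 1, 2, 3))  # expect ((0, 1), (2, 3))
```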

A Parameter-free Hedging Algorithm

no code implementations · NeurIPS 2009 · Kamalika Chaudhuri, Yoav Freund, Daniel J. Hsu

Previous algorithms for learning in this framework have a tunable learning rate parameter, and a major barrier to using online learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large.
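For reference, here is the standard Hedge (exponential weights) update with the tunable learning rate eta that this paper eliminates. Setting eta well typically requires knowing the horizon and the number of actions in advance, which is the practical barrier the abstract describes; the toy loss matrix below is made up to show how regret varies with eta.

```python
# Standard Hedge (exponential weights) with the tunable learning rate eta
# that this paper removes; the loss data below is synthetic.
import numpy as np

def hedge(loss_matrix, eta):
    """loss_matrix: (T, N) losses in [0, 1] for N actions over T rounds."""
    T, N = loss_matrix.shape
    log_w = np.zeros(N)
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())     # stable softmax over log-weights
        p /= p.sum()
        total_loss += p @ loss_matrix[t]    # expected loss this round
        log_w -= eta * loss_matrix[t]       # exponential-weights update
    regret = total_loss - loss_matrix.sum(axis=0).min()
    return total_loss, regret

rng = np.random.default_rng(0)
losses = rng.random((1000, 10))
losses[:, 3] = np.clip(losses[:, 3] - 0.3, 0.0, 1.0)  # action 3 is best on average
for eta in (0.01, 0.1, 1.0):                          # regret depends on this choice
    print(f"eta={eta}: (loss, regret)={hedge(losses, eta)}")
```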
