Search Results for author: Hal Daumé

Found 8 papers, 0 papers with code

Learned Prioritization for Trading Off Accuracy and Speed

no code implementations NeurIPS 2012 Jiarong Jiang, Adam Teichert, Jason Eisner, Hal Daumé

Users want natural language processing (NLP) systems to be both fast and accurate, but quality often comes at the cost of speed.

Imitation Learning

Imitation Learning by Coaching

no code implementations NeurIPS 2012 He He, Jason Eisner, Hal Daumé

However, these guarantees depend on how well the learned policy can imitate the oracle on the training data.

feature selection, Imitation Learning
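The caveat above invites a concrete check: the guarantee is only as good as the learned policy's agreement with the oracle on the training states. Below is a toy sketch of measuring that 0/1 disagreement; the states, oracle, and policy are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
states = rng.standard_normal((1000, 5))   # hypothetical training states

def oracle(s):
    # Invented oracle: a fixed linear decision rule.
    return (s @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) > 0).astype(int)

def policy(s):
    # Invented learned policy: a slightly perturbed rule.
    return (s @ np.array([0.9, -1.1, 0.4, 0.1, 1.8]) > 0).astype(int)

# The quantity the guarantee hinges on: 0/1 disagreement with the oracle.
imitation_loss = np.mean(policy(states) != oracle(states))
print(f"imitation loss on training states: {imitation_loss:.3f}")
```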

Simultaneously Leveraging Output and Task Structures for Multiple-Output Regression

no code implementations NeurIPS 2012 Piyush Rai, Abhishek Kumar, Hal Daumé

In this paper, we present a multiple-output regression model that leverages the covariance structure of the functions (i.e., how the multiple functions are related to each other) as well as the conditional covariance structure of the outputs.

regression
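As a rough sketch of the general idea (not the paper's model), a multi-output ridge regression can couple its outputs through an output covariance matrix; with a quadratic penalty, setting the gradient to zero yields a Sylvester equation. All names, sizes, and the covariance estimate below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, d, k = 200, 10, 3                      # samples, input dim, number of outputs
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, k))
Y = X @ W_true + 0.1 * rng.standard_normal((n, k))

# Crude plug-in estimate of the output covariance (illustrative only).
Omega = np.cov(Y, rowvar=False) + 1e-6 * np.eye(k)
lam = 1.0

# argmin_W ||Y - XW||_F^2 + lam * tr(W Omega^{-1} W^T); the stationarity
# condition is the Sylvester equation (X^T X) W + W (lam Omega^{-1}) = X^T Y.
W_hat = solve_sylvester(X.T @ X, lam * np.linalg.inv(Omega), X.T @ Y)
print("relative fit error:", np.linalg.norm(X @ W_hat - Y) / np.linalg.norm(Y))
```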

Co-regularized Multi-view Spectral Clustering

no code implementations NeurIPS 2011 Abhishek Kumar, Piyush Rai, Hal Daumé

In many clustering problems, we have access to multiple views of the data, each of which could individually be used for clustering.

Clustering
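A simplified sketch of the pairwise co-regularization idea for two views follows: each view gets its own spectral embedding, and the embeddings are alternately re-solved with a term rewarding agreement. The kernel choice, hyperparameters, and alternation schedule are guesses, not the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def normalized_similarity(X, sigma=1.0):
    # Gaussian similarity with symmetric normalization D^{-1/2} K D^{-1/2}.
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    return K * np.outer(d_inv_sqrt, d_inv_sqrt)

def top_eigenvectors(M, k):
    # k eigenvectors of a symmetric matrix with the largest eigenvalues.
    _, vecs = np.linalg.eigh(M)
    return vecs[:, -k:]

def coregularized_spectral(X1, X2, k, lam=0.5, iters=10):
    L1, L2 = normalized_similarity(X1), normalized_similarity(X2)
    U1, U2 = top_eigenvectors(L1, k), top_eigenvectors(L2, k)
    for _ in range(iters):
        # Re-solve each view's embedding with a term rewarding agreement
        # with the other view's current embedding.
        U1 = top_eigenvectors(L1 + lam * (U2 @ U2.T), k)
        U2 = top_eigenvectors(L2 + lam * (U1 @ U1.T), k)
    rows = U1 / np.linalg.norm(U1, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(rows)

# Two synthetic views of the same underlying clusters.
rng = np.random.default_rng(0)
centers = 4.0 * rng.standard_normal((3, 4))
z = rng.integers(0, 3, size=90)
X1 = centers[z] + 0.5 * rng.standard_normal((90, 4))
X2 = centers[z] @ rng.standard_normal((4, 6)) + 0.5 * rng.standard_normal((90, 6))
labels = coregularized_spectral(X1, X2, k=3)
```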

Message-Passing for Approximate MAP Inference with Latent Variables

no code implementations NeurIPS 2011 Jiarong Jiang, Piyush Rai, Hal Daumé

We consider a general inference setting for discrete probabilistic graphical models where we seek maximum a posteriori (MAP) estimates for a subset of the random variables (max nodes), marginalizing over the rest (sum nodes).
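On a small model, this marginal-MAP setting can be checked by brute force: sum out the sum nodes, then maximize over the max nodes. The potentials and the variable split below are made up for illustration; the paper's contribution is message-passing, not enumeration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Unnormalized potentials phi(a, b, c) over three binary variables.
phi = rng.random((2, 2, 2))

# Treat a, b as max nodes and c as a sum node:
#   (a*, b*) = argmax_{a, b} sum_c phi(a, b, c)
marginal = phi.sum(axis=2)                # marginalize the sum node c
a_star, b_star = np.unravel_index(marginal.argmax(), marginal.shape)
print("marginal-MAP assignment:", a_star, b_star)
```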

Learning Multiple Tasks using Manifold Regularization

no code implementations NeurIPS 2010 Arvind Agarwal, Samuel Gerber, Hal Daumé

We present a novel method for multitask learning (MTL) based on manifold regularization: we assume that all task parameters lie on a manifold.
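A discrete stand-in for this idea (not the authors' algorithm) is to penalize task parameters with a graph Laplacian over an assumed task-similarity graph, pulling related tasks' weight vectors together. Everything below, including the fully connected task graph, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, d = 4, 50, 8                         # tasks, samples per task, features
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
w_shared = rng.standard_normal(d)
ys = [X @ (w_shared + 0.1 * rng.standard_normal(d)) for X in Xs]

A = np.ones((T, T)) - np.eye(T)            # assumed task-similarity graph
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian over tasks
lam, lr = 1.0, 1e-3

W = np.zeros((d, T))                       # column t holds task t's parameters
for _ in range(2000):
    # Gradient of (1/2) * sum_t ||y_t - X_t w_t||^2 + (lam/2) * tr(W L W^T).
    grad = np.stack([Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) for t in range(T)], axis=1)
    grad += lam * W @ L
    W -= lr * grad
```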

Multi-Label Prediction via Sparse Infinite CCA

no code implementations NeurIPS 2009 Piyush Rai, Hal Daumé

Canonical Correlation Analysis (CCA) is a useful technique for modeling dependencies between two (or more) sets of variables.

Supervised dimensionality reduction
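For readers unfamiliar with the classical building block, here is a minimal usage sketch of plain CCA via scikit-learn; this is standard finite, non-sparse CCA, not the paper's sparse infinite extension, and the synthetic data are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
z = rng.standard_normal((100, 2))          # shared latent signal
X = z @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((100, 5))
Y = z @ rng.standard_normal((2, 4)) + 0.1 * rng.standard_normal((100, 4))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)         # paired projections
corrs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", corrs)
```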
