Search Results for author: James Petterson

Found 7 papers, 0 papers with code

Learning as MAP Inference in Discrete Graphical Models

no code implementations NeurIPS 2012 Xianghang Liu, James Petterson, Tibério S. Caetano

Instead of relying on convex losses and regularisers, as in SVMs, logistic regression and boosting, or on non-convex but continuous formulations such as those encountered in neural networks and deep belief networks, our framework entails a non-convex but \emph{discrete} formulation, where estimation amounts to finding a MAP configuration in a graphical model whose potential functions are low-dimensional discrete surrogates for the misclassification loss.

Binary Classification, Feature Selection
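The abstract's idea can be illustrated with a minimal brute-force sketch: discretise the weights of a linear classifier to a small grid and pick the configuration that best trades off misclassification count against a unary sparsity potential. The data, grid, and potential weights below are made up for illustration; the paper's actual graphical-model inference is far more structured than exhaustive search.

```python
from itertools import product

# Toy linearly separable data: the true rule is sign(x1 - x2).
X = [(1.0, 0.0), (0.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
y = [1, -1, 1, -1]

def errors(w):
    """Misclassification count of the linear rule sign(w . x)."""
    return sum(1 for x, t in zip(X, y)
               if (w[0] * x[0] + w[1] * x[1] >= 0) != (t == 1))

def score(w):
    # Negated discrete loss plus a sparsity-favouring unary potential;
    # the 0.1 weight is an arbitrary illustrative choice.
    return -errors(w) - 0.1 * sum(abs(v) for v in w)

# "Estimation as MAP": exhaustively score every discrete weight
# configuration in {-1, 0, +1}^2 and keep the best one.
w_map = max(product([-1, 0, 1], repeat=2), key=score)
```

With this data the search recovers w_map = (1, -1), the only zero-error configuration on the grid.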

Submodular Multi-Label Learning

no code implementations NeurIPS 2011 James Petterson, Tibério S. Caetano

The key novelty of our formulation is that we explicitly allow for assortative (submodular) pairwise label interactions, i.e., we can leverage the co-occurrence of pairs of labels in order to improve the quality of prediction.

Multi-Label Learning
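A small sketch of what assortative pairwise label interactions buy you: a label whose unary score alone would exclude it can still be predicted when a co-occurrence bonus links it to a confident label. The labels, scores, and brute-force maximisation below are all illustrative; the paper performs this inference with structured methods rather than enumeration.

```python
from itertools import product

labels = ["beach", "sea", "snow"]
# Hypothetical per-label scores from some base classifier (made-up numbers).
unary = {"beach": 0.4, "sea": -0.1, "snow": -0.6}
# Assortative pairwise potential: a bonus when a co-occurring pair of
# labels is predicted together.
pairwise = {("beach", "sea"): 0.5}

def score(assign):
    s = sum(unary[l] for l, on in zip(labels, assign) if on)
    s += sum(bonus for (a, b), bonus in pairwise.items()
             if assign[labels.index(a)] and assign[labels.index(b)])
    return s

best = max(product([0, 1], repeat=len(labels)), key=score)
predicted = [l for l, on in zip(labels, best) if on]
# "sea" alone scores -0.1, but "beach" + "sea" jointly score 0.8,
# so the co-occurrence bonus pulls "sea" into the prediction.
```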

Multitask Learning without Label Correspondences

no code implementations NeurIPS 2010 Novi Quadrianto, James Petterson, Tibério S. Caetano, Alex J. Smola, S. V. N. Vishwanathan

We propose an algorithm to perform multitask learning where each task has potentially distinct label sets and label correspondences are not readily available.

Data Integration, General Classification

Word Features for Latent Dirichlet Allocation

no code implementations NeurIPS 2010 James Petterson, Wray Buntine, Shravan M. Narayanamurthy, Tibério S. Caetano, Alex J. Smola

We extend Latent Dirichlet Allocation (LDA) by explicitly allowing for the encoding of side information in the distribution over words.
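One simple way side information can enter the distribution over words, loosely in the spirit of the abstract, is a feature-informed (rather than flat) Dirichlet prior: each word's prior mass follows an exponential link on its features. The features, weights, and link below are assumptions for illustration, not the paper's exact parameterisation.

```python
import math

# Hypothetical word features (e.g. lexicon-membership indicators) and
# assumed feature weights for a single topic.
features = {"bank": [1.0, 0.0], "money": [1.0, 1.0], "river": [0.0, 1.0]}
eta = [0.8, -0.3]

def prior_mass(word):
    # Exponential link: words whose features align with eta get more
    # prior mass in this topic's word distribution.
    return math.exp(sum(f * e for f, e in zip(features[word], eta)))

total = sum(prior_mass(w) for w in features)
beta = {w: prior_mass(w) / total for w in features}  # normalised prior
```

Here "bank" (feature weight 0.8) receives more prior mass than "river" (weight -0.3), rather than the uniform mass a standard symmetric LDA prior would assign.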

Reverse Multi-Label Learning

no code implementations NeurIPS 2010 James Petterson, Tibério S. Caetano

Multi-label classification is the task of predicting potentially multiple labels for a given instance.

Classification, Document Classification +3

Distribution Matching for Transduction

no code implementations NeurIPS 2009 Novi Quadrianto, James Petterson, Alex J. Smola

Many transductive inference algorithms assume that distributions over training and test estimates should be related, e.g., by providing a large margin of separation on both sets.

General Classification, Regression
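The assumption in the abstract can be sketched as a penalty on the mismatch between a classifier's score distributions on the labelled and unlabelled sets. Matching only the mean and variance is a simple illustrative choice of statistics, not the paper's actual matching criterion, and the scores below are made up.

```python
def stats(scores):
    """Mean and variance of a list of real-valued classifier scores."""
    m = sum(scores) / len(scores)
    v = sum((s - m) ** 2 for s in scores) / len(scores)
    return m, v

def mismatch(train_scores, test_scores):
    # Transductive penalty: how far apart the two score distributions
    # are, measured by their first two moments.
    (m1, v1), (m2, v2) = stats(train_scores), stats(test_scores)
    return (m1 - m2) ** 2 + (v1 - v2) ** 2

train = [1.2, -0.8, 0.9, -1.1]
# Candidate A's test scores resemble its training scores (well separated);
# candidate B's collapse towards zero, so it pays a larger penalty.
a_test = [1.0, -0.9, 1.1, -1.0]
b_test = [0.1, 0.2, 0.0, 0.1]
```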
