no code implementations • NAACL 2016 • Dan Gillick, Cliff Brunk, Oriol Vinyals, Amarnag Subramanya
We describe an LSTM-based model, which we call Byte-to-Span (BTS), that reads text as bytes and outputs span annotations of the form [start, length, label], where start positions, lengths, and labels are separate entries in our vocabulary.
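As a concrete illustration of this output format (a minimal sketch, not the authors' code), the Python snippet below serializes span annotations over a UTF-8 byte sequence into flat [start, length, label] target triples drawn from a single output vocabulary. The helper name `encode_targets` and token spellings such as `START_12` are hypothetical.

```python
def encode_targets(text, spans):
    """Sketch of a BTS-style target encoding.

    spans: list of (char_start, char_end, label) over `text`.
    Returns the input byte sequence and a flat target sequence of
    vocabulary entries -- one [start, length, label] triple per span.
    """
    data = list(text.encode("utf-8"))
    targets = []
    for char_start, char_end, label in sorted(spans):
        # Convert character offsets to byte offsets, since the model
        # reads raw bytes rather than characters or tokens.
        byte_start = len(text[:char_start].encode("utf-8"))
        byte_len = len(text[char_start:char_end].encode("utf-8"))
        targets += [f"START_{byte_start}", f"LEN_{byte_len}", label]
    return data, targets

byte_seq, tgt = encode_targets("Jim bought 300 shares.", [(0, 3, "PER")])
print(tgt)  # ['START_0', 'LEN_3', 'PER']
```

Working over byte offsets rather than tokens is what lets a single model handle many languages and noisy text without language-specific tokenization.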
no code implementations • TACL 2015 • Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, Fernando Pereira
We present Plato, a probabilistic model for entity resolution that includes a novel approach for handling noisy or uninformative features, and supplements labeled training data derived from Wikipedia with a very large unlabeled text corpus.
no code implementations • NeurIPS 2009 • Amarnag Subramanya, Jeff A. Bilmes
We prove several theoretical properties of a graph-regularized transductive learning objective based on minimizing a Kullback-Leibler divergence-based loss.
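For context, one common form of such an objective combines a KL-divergence fit to the labeled vertices' empirical distributions, a KL smoothness penalty over weighted graph edges, and an entropy regularizer. The sketch below is illustrative only; the weights `mu` and `nu` are assumed hyperparameters, not necessarily the paper's notation.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions (vectors summing to 1)."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def objective(P, R, labeled, W, mu=1.0, nu=0.01):
    """Sketch of a KL-based graph-regularized transductive objective.

    P: (n, c) per-vertex label distributions (the optimization variables).
    R: (n, c) empirical label distributions for labeled vertices.
    labeled: indices of labeled vertices.  W: (n, n) edge weights.
    """
    n = len(P)
    # Fit term: estimated distributions should match observed labels.
    fit = sum(kl(R[i], P[i]) for i in labeled)
    # Smoothness term: neighboring vertices should carry similar
    # label distributions, weighted by edge strength.
    smooth = sum(W[i, j] * kl(P[i], P[j])
                 for i in range(n) for j in range(n) if W[i, j] > 0)
    # Entropy term: keeps distributions high-entropy far from labels.
    ent = sum(-np.sum(np.clip(P[i], 1e-12, 1.0)
                      * np.log(np.clip(P[i], 1e-12, 1.0)))
              for i in range(n))
    return fit + mu * smooth - nu * ent
```

Using KL divergence over distributions, rather than squared error over labels, is what makes the objective compatible with probabilistic (soft) label assignments on the graph.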