no code implementations • IJCNLP 2019 • Andrew Drozdov, Patrick Verga, Yi-Pei Chen, Mohit Iyyer, Andrew McCallum
Understanding text often requires identifying meaningful constituent spans such as noun phrases and verb phrases.
1 code implementation • NAACL 2019 • Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum
We introduce the deep inside-outside recursive autoencoder (DIORA), a fully unsupervised method for discovering syntax that simultaneously learns representations for the constituents within the induced tree.
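As a rough illustration of the inside pass at the heart of an inside-outside chart model like DIORA, the toy sketch below fills a CKY-style chart bottom-up, mixing candidate parent vectors over split points with softmax weights. The composition function, scoring rule, and all dimensions here are invented for illustration and are not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                        # hypothetical hidden size
W = rng.normal(scale=0.1, size=(D, 2 * D))   # toy composition weights

def compose(left, right):
    """Compose two child span vectors into a parent vector (toy MLP)."""
    return np.tanh(W @ np.concatenate([left, right]))

def inside_pass(leaves):
    """Fill a CKY-style chart bottom-up; each span's vector is a
    softmax-weighted mixture over its possible split points."""
    n = len(leaves)
    chart = {(i, i + 1): (leaves[i], 0.0) for i in range(n)}  # (vector, score)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cands, scores = [], []
            for k in range(i + 1, j):        # every split point of span (i, j)
                lv, ls = chart[(i, k)]
                rv, rs = chart[(k, j)]
                v = compose(lv, rv)
                s = ls + rs + float(v @ v)   # toy compatibility score
                cands.append(v)
                scores.append(s)
            w = np.exp(scores - np.max(scores))
            w /= w.sum()
            chart[(i, j)] = (sum(wi * vi for wi, vi in zip(w, cands)),
                             float(np.dot(w, scores)))
    return chart

leaves = [rng.normal(size=D) for _ in range(4)]
chart = inside_pass(leaves)
root_vec, root_score = chart[(0, 4)]
print(root_vec.shape)  # (8,)
```

The chart for a length-4 sentence holds one entry per span, and the soft weighting over split points is what makes the structure search differentiable end-to-end.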
no code implementations • EMNLP 2018 • Nathan Greenberg, Trapit Bansal, Patrick Verga, Andrew McCallum
This paper presents a method for training a single CRF extractor from multiple datasets with disjoint or partially overlapping sets of entity types.
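One standard way to train a single CRF on datasets with disjoint or partially overlapping tag inventories is to marginalize over the tags a given dataset leaves unannotated. The toy linear-chain sketch below illustrates that marginal likelihood with an invented tag set and random scores; it is a sketch of the general technique, not the paper's exact model:

```python
import numpy as np

def logsumexp(x, axis=None):
    m = np.max(x, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))).squeeze(axis)

# Toy linear-chain CRF over a unified tag set; a source dataset that only
# annotates some types leaves the rest unobserved, and those positions are
# marginalized over every tag they could still take.
TAGS = ['O', 'B-PER', 'B-ORG']               # unified label space (toy)
rng = np.random.default_rng(4)
T, K = 4, len(TAGS)
emit = rng.normal(size=(T, K))               # toy emission scores
trans = rng.normal(size=(K, K))              # toy transition scores

def log_z(mask):
    """Forward algorithm restricted to tags allowed by `mask` (T x K bool)."""
    alpha = np.where(mask[0], emit[0], -np.inf)
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + trans, axis=0) + emit[t]
        alpha = np.where(mask[t], alpha, -np.inf)
    return logsumexp(alpha)

full = np.ones((T, K), dtype=bool)           # unconstrained: partition function
partial = full.copy()
partial[1] = [False, True, False]            # position 1 observed as B-PER
# log-likelihood of the partially labeled sequence = constrained - unconstrained
loglik = log_z(partial) - log_z(full)
print(loglik)
```

The key move is that the same forward recursion computes both terms; only the boolean mask changes, so a single extractor can be trained on every dataset regardless of which entity types each one annotates.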
1 code implementation • EMNLP 2018 • Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum
Unlike previous models, which require significant pre-processing to prepare linguistic features, LISA can incorporate syntax using only raw tokens as input, encoding the sequence just once to simultaneously perform parsing, predicate detection, and role labeling for all predicates.
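The core idea of syntactically informed self-attention, where one attention head is additionally trained to attend to each token's syntactic head, can be caricatured in a few lines. Everything below (sizes, projections, the gold head indices) is a hypothetical toy, not LISA's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
n, d = 5, 16                        # toy sentence length and model size
Q = rng.normal(size=(n, d))         # query projections of the raw-token encodings
K = rng.normal(size=(n, d))         # key projections

attn = softmax(Q @ K.T / np.sqrt(d))   # one attention head over the sentence

# In the spirit of LISA, this head receives an extra supervised loss pushing
# token i to attend to its syntactic head; gold_heads is a hypothetical parse.
gold_heads = np.array([1, 1, 3, 3, 3])
parse_loss = -np.log(attn[np.arange(n), gold_heads]).mean()
print(round(parse_loss, 3))
```

Because the parsing signal enters as an auxiliary loss on an attention distribution the model computes anyway, the sequence only needs to be encoded once for all of the tasks.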
1 code implementation • NAACL 2018 • Patrick Verga, Emma Strubell, Andrew McCallum
Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention.
no code implementations • 15 Nov 2017 • Shikhar Murty, Patrick Verga, Luke Vilnis, Andrew McCallum
We consider the challenging problem of entity typing over an extremely fine-grained set of types, wherein a single mention or entity can have many simultaneous and often hierarchically structured types.
no code implementations • 23 Oct 2017 • Patrick Verga, Emma Strubell, Ofer Shai, Andrew McCallum
We propose a model that considers all mention and entity pairs simultaneously when making a prediction.
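One common way to score every mention pair at once is a per-relation biaffine product between head-role and tail-role projections of the mention encodings. The sketch below is a generic version of that idea under invented dimensions, not necessarily the paper's exact scoring function:

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, r = 6, 12, 4                  # mentions, hidden size, relation types
H = rng.normal(size=(m, d))         # contextual mention encodings (toy)
Wh = rng.normal(scale=0.1, size=(d, d))
Wt = rng.normal(scale=0.1, size=(d, d))
B = rng.normal(scale=0.1, size=(r, d, d))   # one biaffine matrix per relation

head = H @ Wh                       # head-role projections
tail = H @ Wt                       # tail-role projections

# Score every (head, tail) mention pair for every relation in one tensor op.
scores = np.einsum('id,rde,je->ijr', head, B, tail)
print(scores.shape)  # (6, 6, 4)
```

A single `einsum` yields the full m x m x r score tensor, which is what makes predicting over all pairs in a document tractable compared with scoring one sentence-level pair at a time.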
4 code implementations • EMNLP 2017 • Emma Strubell, Patrick Verga, David Belanger, Andrew McCallum
Today, when many practitioners run basic NLP over the entire web and on large-volume traffic, faster methods are paramount for saving time and energy costs.
Ranked #24 on Named Entity Recognition (NER) on OntoNotes v5 (English)
1 code implementation • EACL 2017 • Patrick Verga, Arvind Neelakantan, Andrew McCallum
In experiments predicting both relations and entity types, we demonstrate that, despite having an order of magnitude fewer parameters than traditional universal schema, our model matches the traditional model's accuracy; more importantly, it can make predictions about unseen rows with nearly the same accuracy as rows available at training time.
1 code implementation • WS 2016 • Patrick Verga, Andrew McCallum
In experimental results on the FB15k-237 benchmark, we demonstrate that we can match the performance of a comparable model with explicit entity pair representations using a model of attention over relation types.
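A minimal sketch of attention over relation types, assuming toy embeddings: the entity pair gets no embedding of its own; its representation is an attention-weighted mixture of the relation types observed with it, attended by the relation being queried. Names and sizes below are illustrative only:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(3)
r, d = 5, 8                         # observed relation types, embedding size
rel_emb = rng.normal(size=(r, d))   # embeddings of relations seen with this pair
query_rel = rng.normal(size=d)      # the relation we want to score for the pair

# "Row-less" universal schema: no per-pair parameters; the pair vector is an
# attention-weighted mix of the relation types observed with it.
attn = softmax(rel_emb @ query_rel)
pair_vec = attn @ rel_emb
score = float(pair_vec @ query_rel)
print(pair_vec.shape)  # (8,)
```

Because the pair representation is built on the fly from observed relations, the same computation applies unchanged to entity pairs never seen during training.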
1 code implementation • NAACL 2016 • Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, Andrew McCallum
In response, this paper introduces significant further improvements to the coverage and flexibility of universal schema relation extraction: predictions for entities unseen in training and multilingual transfer learning to domains with no annotation.