Search Results for author: Patrick Verga

Found 11 papers, 7 papers with code

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders

1 code implementation · NAACL 2019 · Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum

We introduce the deep inside-outside recursive autoencoder (DIORA), a fully unsupervised method that discovers syntax while simultaneously learning representations for the constituents of the induced tree.

Constituency Grammar Induction · Sentence
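To make the DIORA abstract above concrete, here is a minimal, hypothetical sketch of an inside pass over a span chart. The tanh-of-sum composition and the scoring vector `w` are toy stand-ins for the paper's learned composition and scoring functions; the real model also runs a mirror-image outside pass and trains with an autoencoder objective, none of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def inside_pass(leaves, w):
    """CKY-style inside pass: each span's vector is a score-weighted
    average of the compositions over all binary splits of that span."""
    n, d = leaves.shape
    vec = {(i, i + 1): leaves[i] for i in range(n)}     # width-1 spans
    score = {(i, i + 1): 0.0 for i in range(n)}
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            ks = list(range(i + 1, j))                  # candidate split points
            comps = np.stack([np.tanh(vec[i, k] + vec[k, j]) for k in ks])
            s = np.array([score[i, k] + score[k, j] + comps[t] @ w
                          for t, k in enumerate(ks)])
            p = softmax(s)                              # soft choice of split
            vec[i, j] = p @ comps
            score[i, j] = p @ s
    return vec, score

# toy usage: a 4-token "sentence" with 8-dimensional embeddings
leaves = rng.normal(size=(4, 8))
w = rng.normal(size=8)
vec, score = inside_pass(leaves, w)
print(score[0, 4])    # inside score of the full-sentence span
```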

Linguistically-Informed Self-Attention for Semantic Role Labeling

1 code implementation · EMNLP 2018 · Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum

Unlike previous models which require significant pre-processing to prepare linguistic features, LISA can incorporate syntax using merely raw tokens as input, encoding the sequence only once to simultaneously perform parsing, predicate detection and role labeling for all predicates.

Dependency Parsing · Multi-Task Learning · +4
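The LISA abstract above hinges on one idea: a single self-attention head is supervised to attend to each token's syntactic head, so parsing rides on ordinary attention rather than on a separate pre-processing pipeline. Below is a hypothetical numpy sketch of that supervision signal only; the weight matrices, dimensions, and toy gold heads are illustrative assumptions, not the released model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, Wq, Wk):
    """Scaled dot-product attention weights for a single head."""
    Q, K = X @ Wq, X @ Wk
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)

def parse_attention_loss(attn, gold_heads):
    """Cross-entropy pushing each token's attention mass onto its gold
    syntactic head, so this head learns to act as a parser."""
    n = attn.shape[0]
    return -np.log(attn[np.arange(n), gold_heads] + 1e-9).mean()

rng = np.random.default_rng(0)
n, d = 5, 16
X = rng.normal(size=(n, d))                     # token encodings
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
attn = attention_head(X, Wq, Wk)
gold_heads = np.array([1, 1, 3, 3, 3])          # toy dependency heads
print(parse_attention_loss(attn, gold_heads))
```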

Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction

1 code implementation · NAACL 2018 · Patrick Verga, Emma Strubell, Andrew McCallum

Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention.

Relation · Relation Extraction · +1
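The abstract above describes the limitation this paper attacks: instead of scoring one mention pair in one sentence, the model scores every co-occurring mention pair for an entity pair across a full abstract and pools the scores. A hedged sketch of one such pooling step follows; the bilinear scorer and log-sum-exp pooling mirror the paper's description at a high level, but the shapes and parameters here are toy assumptions.

```python
import numpy as np

def logsumexp(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def entity_pair_score(head_mentions, tail_mentions, W):
    """Bilinear score for every (head mention, tail mention) combination
    in the document, pooled with log-sum-exp so a few confident mention
    pairs can dominate the entity-level prediction."""
    scores = np.array([h @ W @ t
                       for h in head_mentions for t in tail_mentions])
    return logsumexp(scores)

rng = np.random.default_rng(0)
d = 8
heads = rng.normal(size=(3, d))   # 3 mentions of the head entity
tails = rng.normal(size=(2, d))   # 2 mentions of the tail entity
W = rng.normal(size=(d, d))       # per-relation bilinear parameters
print(entity_pair_score(heads, tails, W))
```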

Finer Grained Entity Typing with TypeNet

no code implementations · 15 Nov 2017 · Shikhar Murty, Patrick Verga, Luke Vilnis, Andrew McCallum

We consider the challenging problem of entity typing over an extremely fine grained set of types, wherein a single mention or entity can have many simultaneous and often hierarchically-structured types.

Entity Typing
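To illustrate what "hierarchically-structured types" means in practice, here is a tiny hypothetical example with a toy hierarchy (not TypeNet itself, and not the paper's model): assigning a fine-grained type implicitly commits the typer to every ancestor type as well.

```python
# child -> parent edges of a toy type hierarchy (illustrative only)
PARENT = {
    "musician": "artist",
    "artist": "person",
    "politician": "person",
}

def with_ancestors(types):
    """Expand a label set upward: a fine-grained type implies every
    ancestor type on its path to the root."""
    closed = set(types)
    for t in types:
        while t in PARENT:
            t = PARENT[t]
            closed.add(t)
    return closed

print(sorted(with_ancestors({"musician"})))   # ['artist', 'musician', 'person']
```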

Generalizing to Unseen Entities and Entity Pairs with Row-less Universal Schema

1 code implementation · EACL 2017 · Patrick Verga, Arvind Neelakantan, Andrew McCallum

In experiments predicting both relations and entity types, we demonstrate that, despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model and, more importantly, make predictions about unseen rows with nearly the same accuracy as for rows available at training time.

Matrix Completion

Row-less Universal Schema

1 code implementation · WS 2016 · Patrick Verga, Andrew McCallum

In experimental results on the FB15k-237 benchmark, we demonstrate that we can match the performance of a comparable model with explicit entity pair representations using a model of attention over relation types.

Relation
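The "attention over relation types" named above (the same mechanism as in the row-less EACL 2017 paper earlier in this list) can be sketched in a few lines: an entity pair ("row") has no learned embedding of its own, and its representation is a query-dependent attention average over the relation types it was observed with, which is why unseen pairs need no retraining. The sketch below is a hypothetical minimal version, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rowless_score(observed_rels, query_rel):
    """Score a query relation against an entity pair represented only
    by the embeddings of the relations it was observed with."""
    attn = softmax(observed_rels @ query_rel)   # attention w.r.t. the query
    pair_vec = attn @ observed_rels             # aggregated "row" vector
    return pair_vec @ query_rel

rng = np.random.default_rng(0)
d = 8
observed = rng.normal(size=(4, d))   # relation types seen with this pair
query = rng.normal(size=d)           # candidate relation to predict
print(rowless_score(observed, query))
```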

Multilingual Relation Extraction using Compositional Universal Schema

1 code implementation · NAACL 2016 · Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, Andrew McCallum

In response, this paper introduces significant further improvements to the coverage and flexibility of universal schema relation extraction: predictions for entities unseen in training and multilingual transfer learning to domains with no annotation.

Relation · Relation Extraction · +4
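The "compositional" part of this title refers to encoding a textual relation pattern from its tokens, so patterns never seen during training, including those in other languages, still receive a relation vector. Below is a hypothetical toy version: the paper uses an LSTM encoder, while the simple recurrence, vocabulary, and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
EMB = {w: rng.normal(size=d) for w in
       ["<unk>", "was", "born", "in", "nació", "en"]}
W = rng.normal(size=(d, d)) * 0.1

def encode_pattern(tokens):
    """Toy recurrent encoder over a relation pattern's tokens; any
    composition over tokens gives unseen patterns a representation."""
    h = np.zeros(d)
    for w in tokens:
        h = np.tanh(W @ h + EMB.get(w, EMB["<unk>"]))
    return h

def score(pair_vec, pattern_tokens):
    """Universal-schema score: entity-pair vector dotted with the
    compositionally encoded relation pattern."""
    return pair_vec @ encode_pattern(pattern_tokens)

pair = rng.normal(size=d)                  # entity-pair embedding
print(score(pair, ["was", "born", "in"]))
print(score(pair, ["nació", "en"]))        # pattern unseen at training time
```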
