The relation expressed in each sentence is first recognized by distant supervision methods and then filtered by crowdworkers.
We propose a neural network model for joint extraction of named entities and relations between them, without any hand-crafted features.
This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences.
The interaction between the temporal and the causal components, although limited in its effects, yields promising results and confirms the tight connection between the temporal and the causal dimensions of texts.
This paper proposes a novel context-aware approach to joint entity and word-level relation extraction through the semantic composition of words. It introduces a Table Filling Multi-Task Recurrent Neural Network (TF-MTRNN) model that reduces the entity recognition and relation classification tasks to a table-filling problem and models their interdependencies.
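As a rough illustration of the table-filling formulation (a minimal sketch, not the authors' TF-MTRNN code): for an n-word sentence, the diagonal cells of an n-by-n table hold entity tags and the off-diagonal cells hold word-pair relation labels, so both tasks share one structure. All names and labels below are hypothetical.

```python
# Sketch of the table-filling label layout: diagonal = entity tags,
# off-diagonal = relation labels between word pairs ("N" = no relation).
from typing import Dict, List, Tuple


def build_label_table(words: List[str],
                      entity_tags: List[str],
                      relations: Dict[Tuple[int, int], str]) -> List[List[str]]:
    """Arrange gold entity tags and word-pair relations in one table.

    `relations` maps a (head_index, tail_index) pair to a relation label;
    every other off-diagonal cell keeps the null label "N".
    """
    n = len(words)
    table = [["N"] * n for _ in range(n)]
    for i in range(n):
        table[i][i] = entity_tags[i]      # diagonal: entity recognition
    for (i, j), rel in relations.items():
        table[i][j] = rel                 # off-diagonal: relation classification
    return table


# Example sentence: "John lives in London"
words = ["John", "lives", "in", "London"]
tags = ["B-PER", "O", "O", "B-LOC"]
for row in build_label_table(words, tags, {(0, 3): "Live_In"}):
    print(row)
```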
Distant supervision is a popular method for relation extraction from text, but it is known to produce noisy labels.
Distant supervision (DS) is a well-established method for relation extraction from text, based on the assumption that if a knowledge base contains a relation between a term pair, then sentences containing that pair are likely to express the relation.
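A minimal sketch of this labeling heuristic, under the assumption stated above (the knowledge base and sentences here are invented for illustration): every sentence mentioning a term pair from the knowledge base inherits that pair's relation label, which is also how the noisy labels mentioned earlier arise.

```python
# Toy distant-supervision labeler: label any sentence that mentions
# both terms of a knowledge-base pair with that pair's relation.
kb = {
    ("Barack Obama", "Hawaii"): "born_in",
    ("Google", "Mountain View"): "headquartered_in",
}

sentences = [
    "Barack Obama was born in Hawaii.",
    "Barack Obama visited Hawaii last summer.",  # noisy match: not born_in
    "Google opened a new office in Mountain View.",
]


def label_sentences(sentences, kb):
    labeled = []
    for sent in sentences:
        for (head, tail), rel in kb.items():
            # Naive substring matching stands in for entity linking.
            if head in sent and tail in sent:
                labeled.append((sent, head, tail, rel))
    return labeled


for example in label_sentences(sentences, kb):
    print(example)
```

The second sentence is labeled `born_in` even though it does not express that relation, which is exactly the kind of label noise these papers try to mitigate.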
Experimental performance on the relation classification task has generally improved with the adoption of deep neural network architectures.