Joint Entity and Relation Extraction
51 papers with code • 15 benchmarks • 12 datasets
Joint Entity and Relation Extraction is the task of extracting both entity mentions and the semantic relations between them from unstructured text with a single model.
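As a rough illustration of the expected input and output (the entity types and relation label below are made up for the example and not tied to any benchmark):

```python
# A hypothetical input/output pair for joint entity and relation extraction;
# the type and relation labels are illustrative only.
sentence = "Barack Obama was born in Honolulu."

# Entity mentions: (start_token, end_token_exclusive, type)
entities = [
    (0, 2, "PERSON"),    # "Barack Obama"
    (5, 6, "LOCATION"),  # "Honolulu"
]

# Relations between the extracted entities: (head_entity_idx, relation, tail_entity_idx)
relations = [
    (0, "born_in", 1),
]
```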
Most implemented papers
Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction
We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles.
A General Framework for Information Extraction using Dynamic Span Graphs
We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs.
Entity, Relation, and Event Extraction with Contextualized Span Representations
We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction.
CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases
We propose CoType, a novel domain-independent framework that runs a data-driven text segmentation algorithm to extract entity mentions. It jointly embeds entity mentions, relation mentions, text features, and type labels into two low-dimensional spaces (one for entity mentions, one for relation mentions) in which objects with close types have similar representations.
Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme
Joint extraction of entities and relations is an important task in information extraction.
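The unified tagging idea can be sketched as follows; each tag combines an entity boundary, a relation type, and an argument role, following the general scheme named in the title, but the labels and the naive decoder below are illustrative only.

```python
from collections import defaultdict

# A minimal sketch of a unified tagging scheme: each tag combines an entity
# boundary (B/I/E/S), a relation type, and an argument role (1 or 2).
# Labels and the simple decoder below are illustrative only.
tokens = ["The", "United", "States", "president", "Trump", "will", "visit", "Apple"]
tags   = ["O",   "B-CP-1", "E-CP-1", "O",         "S-CP-2", "O",    "O",     "O"]
# "CP" stands for an example Country-President relation.

def decode(tokens, tags):
    """Pair up the argument-1 and argument-2 spans of each relation label."""
    spans = defaultdict(list)                 # (relation, role) -> tokens
    for tok, tag in zip(tokens, tags):
        if tag == "O":
            continue
        _boundary, rel, role = tag.split("-")
        spans[(rel, role)].append(tok)
    triples = []
    for rel in {r for r, _ in spans}:
        arg1, arg2 = " ".join(spans[(rel, "1")]), " ".join(spans[(rel, "2")])
        if arg1 and arg2:
            triples.append((arg1, rel, arg2))
    return triples

print(decode(tokens, tags))  # [('United States', 'CP', 'Trump')]
```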
Span-based Joint Entity and Relation Extraction with Transformer Pre-training
The model is trained using strong within-sentence negative samples, which are efficiently extracted in a single BERT pass.
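A minimal sketch of span enumeration with within-sentence negative sampling; the function names and sampling budget are illustrative, not the authors' implementation.

```python
import random

def enumerate_spans(tokens, max_width=8):
    """All candidate spans of a sentence up to a maximum width, as (start, end_exclusive)."""
    return [(i, j) for i in range(len(tokens))
                   for j in range(i + 1, min(i + max_width, len(tokens)) + 1)]

def sample_training_spans(tokens, gold_spans, num_negatives=100):
    """Gold entity spans plus randomly drawn non-entity ('negative') spans.

    Because every sampled span comes from the same sentence, all span
    representations can be pooled from a single encoder pass over it.
    """
    negatives = [s for s in enumerate_spans(tokens) if s not in gold_spans]
    random.shuffle(negatives)
    return list(gold_spans) + negatives[:num_negatives]
```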
Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders
In this work, we propose novel table-sequence encoders, in which two different encoders, a table encoder and a sequence encoder, are designed to help each other in the representation learning process.
A Frustratingly Easy Approach for Entity and Relation Extraction
Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model.
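A minimal sketch of how predicted entity spans might be turned into input for the relation model by inserting typed marker tokens; the marker spelling here is illustrative, not necessarily the exact tokens used in the paper.

```python
def build_relation_input(tokens, subj, obj):
    """Insert typed marker tokens around two predicted entity spans.

    `subj` and `obj` are (start, end_exclusive, type) triples produced by the
    entity model; the marker token spelling is illustrative.
    """
    (s1, e1, t1), (s2, e2, t2) = subj, obj
    out = []
    for i, tok in enumerate(tokens):
        if i == s1: out.append(f"<S:{t1}>")
        if i == s2: out.append(f"<O:{t2}>")
        out.append(tok)
        if i == e1 - 1: out.append(f"</S:{t1}>")
        if i == e2 - 1: out.append(f"</O:{t2}>")
    return out

tokens = ["Barack", "Obama", "was", "born", "in", "Honolulu", "."]
print(build_relation_input(tokens, (0, 2, "PER"), (5, 6, "LOC")))
# ['<S:PER>', 'Barack', 'Obama', '</S:PER>', 'was', 'born', 'in',
#  '<O:LOC>', 'Honolulu', '</O:LOC>', '.']
```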
A sequence-to-sequence approach for document-level relation extraction
In this paper, we develop a sequence-to-sequence approach, seq2rel, that can learn the subtasks of DocRE (entity extraction, coreference resolution and relation extraction) end-to-end, replacing a pipeline of task-specific components.
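A rough sketch of the kind of linearization a sequence-to-sequence formulation relies on: entities, their coreferent mentions, and relations are serialized into one target string. The separators and special tokens below are illustrative, not necessarily the paper's exact schema.

```python
# Illustrative linearization for document-level joint extraction with a
# sequence-to-sequence model; separators and special tokens are made up here.
document = "Barack Obama was born in Honolulu. Obama later became president."

# Coreferent mentions of the same entity are grouped with ';', entity types
# and the relation label are marked with special tokens.
target = "barack obama ; obama @PERSON@ honolulu @LOCATION@ @BORN_IN@"

# Training reduces to teaching an encoder-decoder model to generate `target`
# from `document`; triples are parsed back out of the generated string.
```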
Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction
This paper proposes a context-aware approach to joint entity and word-level relation extraction through semantic composition of words. It introduces the Table Filling Multi-Task Recurrent Neural Network (TF-MTRNN), which reduces entity recognition and relation classification to a table-filling problem and models the interdependencies between the two tasks.
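The table-filling formulation can be pictured as an n-by-n table over the sentence's words, with diagonal cells carrying entity tags and off-diagonal cells carrying word-level relation labels; the labels in this sketch are illustrative.

```python
# A minimal sketch of the table-filling formulation (labels illustrative):
# diagonal cells hold entity tags, off-diagonal cells hold word-level
# relation labels, and "_" marks the absence of a label.
tokens = ["Obama", "was", "born", "in", "Honolulu"]
n = len(tokens)

table = [["_"] * n for _ in range(n)]
table[0][0] = "B-PER"        # entity tag for "Obama"
table[4][4] = "B-LOC"        # entity tag for "Honolulu"
table[0][4] = "born_in"      # relation label between word 0 and word 4

for row in table:
    print(" ".join(f"{cell:>8}" for cell in row))
```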