Relation Extraction
696 papers with code • 49 benchmarks • 75 datasets
Relation Extraction is the task of predicting attributes and relations for entities in a sentence. For example, given the sentence “Barack Obama was born in Honolulu, Hawaii.”, a relation classifier aims to predict the relation “bornInCity” between the entities “Barack Obama” and “Honolulu”. Relation Extraction is a key component for building relation knowledge graphs, and it is crucial for natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.
Source: Deep Residual Learning for Weakly-Supervised Relation Extraction
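As a concrete illustration of the task's input and output, the sketch below runs the example sentence through a publicly released sequence-to-sequence relation extraction checkpoint, Babelscape/rebel-large, which frames extraction as triplet generation. The model choice, the decoding pattern (taken from that model's card), and the exact output string are assumptions of this sketch, not something prescribed by the task itself.

```python
from transformers import pipeline

# A minimal sketch: REBEL casts relation extraction as text-to-text generation,
# decoding (subject, relation, object) triplets from a plain sentence.
extractor = pipeline("text2text-generation", model="Babelscape/rebel-large",
                     tokenizer="Babelscape/rebel-large")

# Keep the raw token ids so the triplet marker tokens survive decoding.
out = extractor("Barack Obama was born in Honolulu, Hawaii.",
                return_tensors=True, return_text=False)
decoded = extractor.tokenizer.batch_decode([out[0]["generated_token_ids"]])[0]
# Expected output shape, roughly:
# "<triplet> Barack Obama <subj> Honolulu <obj> place of birth"
print(decoded)
```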
Libraries
Use these libraries to find Relation Extraction models and implementations.
Subtasks
- Relation Classification
- Document-level Relation Extraction
- Joint Entity and Relation Extraction
- Temporal Relation Extraction
- Dialog Relation Extraction
- Relationship Extraction (Distant Supervised)
- Continual Relation Extraction
- Binary Relation Extraction
- Zero-shot Relation Triplet Extraction
- 4-ary Relation Extraction
- Hyper-Relational Extraction
- DrugProt
- Relation Explanation
- Multi-Labeled Relation Extraction
- Relation Mention Extraction
Most implemented papers
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows.
LayoutLM: Pre-training of Text and Layout for Document Image Understanding
In this paper, we propose LayoutLM to jointly model interactions between text and layout information across scanned document images, which is beneficial for many real-world document image understanding tasks such as information extraction from scanned documents.
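For a sense of what jointly modeling text and layout means in practice, here is a hedged sketch using the LayoutLM implementation in Hugging Face transformers: each token carries a normalized bounding box alongside its id. The invoice words and box coordinates are made-up stand-ins for real OCR output.

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Invoice", "Number:", "12345"]                              # fake OCR words
boxes = [[60, 50, 200, 80], [210, 50, 330, 80], [340, 50, 420, 80]]  # 0-1000 normalized

# Align one box per wordpiece token; [CLS]/[SEP] get sentinel boxes.
token_ids, token_boxes = [tokenizer.cls_token_id], [[0, 0, 0, 0]]
for word, box in zip(words, boxes):
    pieces = tokenizer.encode(word, add_special_tokens=False)
    token_ids.extend(pieces)
    token_boxes.extend([box] * len(pieces))
token_ids.append(tokenizer.sep_token_id)
token_boxes.append([1000, 1000, 1000, 1000])

outputs = model(input_ids=torch.tensor([token_ids]), bbox=torch.tensor([token_boxes]))
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```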
Matching the Blanks: Distributional Similarity for Relation Learning
General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction.
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer.
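LUKE ships in transformers with an entity-pair classification head, which makes it directly usable for relation classification. The snippet below mirrors the example in the transformers documentation; the studio-ousia/luke-large-finetuned-tacred checkpoint is a third-party release fine-tuned on TACRED relation labels, and the predicted label shown is illustrative.

```python
from transformers import LukeTokenizer, LukeForEntityPairClassification

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
model = LukeForEntityPairClassification.from_pretrained(
    "studio-ousia/luke-large-finetuned-tacred")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character spans of "Beyoncé" and "Los Angeles"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])  # e.g. "per:cities_of_residence"
```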
Are Transformers Effective for Time Series Forecasting?
Recently, there has been a surge of Transformer-based solutions for the long-term time series forecasting (LTSF) task.
Simplifying Graph Convolutional Networks
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations.
LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents.
Joint entity recognition and relation extraction as a multi-head selection problem
State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers.
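The core idea of the multi-head selection formulation is that every token pair (i, j) receives an independent sigmoid score per relation label, so a token can take part in several relations at once. Below is a minimal PyTorch sketch of that pair-scoring step under assumed dimensions; the authors' full model additionally includes a CRF-based entity tagger and label embeddings, which are omitted here.

```python
import torch
import torch.nn as nn

class MultiHeadSelection(nn.Module):
    """Sketch of relation scoring as multi-head selection: each (head, tail)
    token pair gets one independent sigmoid score per relation label."""
    def __init__(self, hidden: int, num_relations: int):
        super().__init__()
        self.head_proj = nn.Linear(hidden, hidden)
        self.tail_proj = nn.Linear(hidden, hidden)
        self.scorer = nn.Linear(hidden, num_relations)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden), e.g. BiLSTM or BERT outputs.
        h = self.head_proj(token_states).unsqueeze(2)   # (B, L, 1, H)
        t = self.tail_proj(token_states).unsqueeze(1)   # (B, 1, L, H)
        pair = torch.tanh(h + t)                        # broadcast: (B, L, L, H)
        return torch.sigmoid(self.scorer(pair))         # (B, L, L, num_relations)

scores = MultiHeadSelection(hidden=128, num_relations=5)(torch.randn(2, 10, 128))
print(scores.shape)  # torch.Size([2, 10, 10, 5])
```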
The Natural Language Decathlon: Multitask Learning as Question Answering
Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting.
Enriching Pre-trained Language Model with Entity Information for Relation Classification
In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task.
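The paper's input construction is straightforward to reproduce: the two target entities are wrapped in reserved marker characters ('$' and '#' in the paper) before tokenization, and the pooled hidden states of the marked spans are combined with the [CLS] vector for classification. A minimal sketch of the marker step, with the classifier head omitted:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Wrap the two target entities in '$' and '#' markers, per the paper's
# convention, so the encoder can locate them in the wordpiece sequence.
sentence = "$ Barack Obama $ was born in # Honolulu #, Hawaii."
print(tokenizer.tokenize(sentence))  # markers survive as standalone tokens
```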