Relation Extraction
663 papers with code • 50 benchmarks • 74 datasets
Relation Extraction is the task of predicting attributes and relations for entities in a sentence. For example, given the sentence “Barack Obama was born in Honolulu, Hawaii.”, a relation classifier aims to predict the relation “bornInCity” between the entities “Barack Obama” and “Honolulu”. Relation Extraction is a key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.
Source: Deep Residual Learning for Weakly-Supervised Relation Extraction
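To make the task's input/output shape concrete, here is a toy sketch in pure Python. It is not the method of any paper listed below: it uses a hypothetical regular-expression pattern in place of a trained classifier, purely to illustrate that relation extraction maps a sentence to (subject, relation, object) triples.

```python
import re

def extract_born_in(sentence):
    """Toy relation extractor for the 'bornInCity' relation.

    Matches a hypothetical surface pattern "<Name> was born in <City>, ..."
    and returns a (subject, relation, object) triple, or None if the
    sentence does not match. Real systems replace this pattern with a
    neural relation classifier over recognized entity pairs.
    """
    m = re.match(r"(.+?) was born in (.+?),", sentence)
    if m is None:
        return None
    subj, obj = m.group(1), m.group(2)
    return (subj, "bornInCity", obj)

print(extract_born_in("Barack Obama was born in Honolulu, Hawaii."))
# → ('Barack Obama', 'bornInCity', 'Honolulu')
```

The triple format shown here is the same output structure targeted by the triple-extraction papers below, even though their models are learned rather than pattern-based.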
Libraries
Use these libraries to find Relation Extraction models and implementations
Subtasks
- Relation Classification
- Document-level Relation Extraction
- Joint Entity and Relation Extraction
- Temporal Relation Extraction
- Dialog Relation Extraction
- Relationship Extraction (Distant Supervised)
- Continual Relation Extraction
- Binary Relation Extraction
- Zero-shot Relation Triplet Extraction
- 4-ary Relation Extraction
- DrugProt
- Hyper-Relational Extraction
- Relation Explanation
- Multi-Labeled Relation Extraction
- Relation Mention Extraction
Most implemented papers
Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing
Our model not only utilizes entities and their latent types as features effectively but also is more interpretable by visualizing attention mechanisms applied to our model and results of LET.
SciBERT: A Pretrained Language Model for Scientific Text
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
A Novel Cascade Binary Tagging Framework for Relational Triple Extraction
Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction.
Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
We introduce Stanza, an open-source Python natural language processing toolkit supporting 66 human languages.
LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents.
DocRED: A Large-Scale Document-Level Relation Extraction Dataset
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs.
Are Transformers Effective for Time Series Forecasting?
Recently, there has been a surge of Transformer-based solutions for the long-term time series forecasting (LTSF) task.
A General Framework for Information Extraction using Dynamic Span Graphs
We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs.
Simple BERT Models for Relation Extraction and Semantic Role Labeling
We present simple BERT-based models for relation extraction and semantic role labeling.