Relation Classification
140 papers with code • 8 benchmarks • 23 datasets
Relation Classification is the task of identifying the semantic relation holding between two nominal entities in text.
Source: Structure Regularized Neural Network for Entity Relation Classification for Chinese Literature Text
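A common way to encode an instance of this task is to wrap the two nominal entities in marker tokens before feeding the sentence to a classifier. A minimal sketch (the `[E1]`/`[E2]` marker names follow a widespread convention but are an illustrative assumption, not a fixed standard):

```python
def mark_entities(tokens, e1_span, e2_span):
    """Insert entity-marker tokens around two entity spans.

    e1_span / e2_span are (start, end) token indices, end exclusive.
    The marker strings ([E1], [/E1], ...) are an illustrative choice;
    real systems register them as special tokens in the tokenizer.
    """
    (s1, t1), (s2, t2) = e1_span, e2_span
    out = []
    for i, tok in enumerate(tokens):
        if i == s1:
            out.append("[E1]")
        if i == s2:
            out.append("[E2]")
        out.append(tok)
        if i == t1 - 1:
            out.append("[/E1]")
        if i == t2 - 1:
            out.append("[/E2]")
    return out

tokens = "The burst has been caused by pressure".split()
marked = mark_entities(tokens, (1, 2), (6, 7))
# → ['The', '[E1]', 'burst', '[/E1]', 'has', 'been',
#    'caused', 'by', '[E2]', 'pressure', '[/E2]']
```

The marked sequence is then classified into one relation label (e.g., a SemEval-2010 Task 8 style label such as Cause-Effect), so the model knows which two spans the relation holds between.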
Latest papers
Mutually Guided Few-shot Learning for Relational Triple Extraction
Specifically, our method consists of an entity-guided relation proto-decoder that first classifies the relations, and a relation-guided entity proto-decoder that extracts entities based on the classified relations.
Annotation-Inspired Implicit Discourse Relation Classification with Auxiliary Discourse Connective Generation
Implicit discourse relation classification is a challenging task due to the absence of discourse connectives.
MatSci-NLP: Evaluating Scientific Language Models on Materials Science Language Tasks Using Text-to-Schema Modeling
Our experiments in this low-resource training setting show that language models pretrained on scientific text outperform BERT trained on general text.
End-to-End $n$-ary Relation Extraction for Combination Drug Therapies
Extracting combination therapies from scientific literature inherently constitutes an $n$-ary relation extraction problem.
Borrowing Human Senses: Comment-Aware Self-Training for Social Media Multimodal Classification
Social media generates massive amounts of multimedia content with paired images and text every day, creating a pressing need to automate vision-and-language understanding for various multimodal classification tasks.
DocRED-FE: A Document-Level Fine-Grained Entity And Relation Extraction Dataset
Joint entity and relation extraction (JERE) is one of the most important tasks in information extraction.
RankDNN: Learning to Rank for Few-shot Learning
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification.
Multilingual Relation Classification via Efficient and Effective Prompting
Prompting pre-trained language models has achieved impressive performance on various NLP tasks, especially in low data regimes.
Generative Prompt Tuning for Relation Classification
Current prompt tuning methods mostly convert the downstream tasks to masked language modeling problems by adding cloze-style phrases and mapping all labels to verbalizations with fixed length, which has proven effective for tasks with simple label spaces.
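The cloze-style conversion described here can be sketched as a template that turns a relation instance into a masked sentence, plus a verbalizer that maps each relation label to fixed tokens. The template wording and the label-to-phrase mapping below are illustrative assumptions, not the paper's exact design:

```python
# Toy cloze-style prompt for relation classification.
# VERBALIZER maps relation labels to fixed-length phrases that a
# masked language model would be asked to predict at [MASK].
VERBALIZER = {
    "founder_of": "founded",
    "employee_of": "works at",
}

def to_cloze(sentence, head, tail):
    """Append a cloze phrase asking for the relation between two entities."""
    return f"{sentence} {head} [MASK] {tail}."

def decode_label(predicted_phrase):
    """Map the phrase predicted at [MASK] back to a relation label."""
    for label, phrase in VERBALIZER.items():
        if predicted_phrase == phrase:
            return label
    return "no_relation"

prompt = to_cloze("Steve Jobs started Apple in 1976.", "Steve Jobs", "Apple")
# prompt == "Steve Jobs started Apple in 1976. Steve Jobs [MASK] Apple."
```

The paper's point is that this fixed-length verbalization breaks down for complex label spaces, which motivates its generative alternative.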
PcMSP: A Dataset for Scientific Action Graphs Extraction from Polycrystalline Materials Synthesis Procedure Text
Extracting scientific action graphs from materials synthesis procedures is important for reproducible research, machine automation, and material prediction.