Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1 or AUC.
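As a toy illustration of why a single measure can mislead (the labels and predictions below are invented, not from any dataset), micro- and macro-F1 diverge sharply on the same predictions under class imbalance:

```python
# Toy example: a majority-class predictor looks strong under micro-F1
# but weak under macro-F1, so reporting only one hides information.
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # always predicts class 0

print(f1_score(y_true, y_pred, average="micro", zero_division=0))  # 0.8
print(f1_score(y_true, y_pred, average="macro", zero_division=0))  # ~0.296
```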
Pre-trained language models (PLMs) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data.
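A minimal sketch of the in-domain fine-tuning route, assuming the Hugging Face transformers API; the checkpoint, tag set, and example sentence are placeholders of mine, not the paper's setup:

```python
# Hedged sketch: fine-tuning a PLM for token-level NER. A real few-shot
# run would loop over a small annotated sample with an optimizer.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tags = ["O", "B-ENT", "I-ENT"]  # invented minimal tag scheme
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(tags)
)

enc = tokenizer("wireless noise cancelling headphones", return_tensors="pt")
# Placeholder gold labels aligned to the subword sequence; a real run
# would project span annotations onto subwords instead.
gold = torch.zeros_like(enc["input_ids"])
loss = model(**enc, labels=gold).loss
loss.backward()  # a gradient step of fine-tuning would follow
```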
Definition Extraction systems are a valuable knowledge source for both humans and algorithms.
Named Entity Recognition (NER) in domains like e-commerce is an understudied problem due to the lack of annotated datasets.
TACRED (Zhang et al., 2017) is one of the largest, most widely used crowdsourced datasets in Relation Extraction (RE).
State-of-the-art NLP models have recently gained increasing syntactic and semantic understanding of language, and explanation methods are crucial to understanding their decisions.
Despite the recent progress, little is known about the features captured by state-of-the-art neural relation extraction (RE) models.
Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain.
Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels.
Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform relation classification, and combines them with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions.
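As a rough, non-authoritative sketch of that idea (the generic BERT encoder, inline entity markers, and mean pooling are substitutions of mine; the original TRE builds on a pre-trained Transformer language model), relation classification reduces to a linear layer over contextualized representations:

```python
# Hedged sketch of a TRE-style classifier: pretrained Transformer
# representations feed a linear relation classifier, with no
# hand-crafted linguistic features.
import torch
from transformers import AutoTokenizer, AutoModel

NUM_RELATIONS = 19  # SemEval-2010 Task 8: 9 directed relations + Other
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")
classifier = torch.nn.Linear(encoder.config.hidden_size, NUM_RELATIONS)

# Mark the entity mentions inline so self-attention can relate them
# across the whole sentence, however far apart they are.
text = "The <e1> company </e1> fabricates plastic <e2> chairs </e2> ."
enc = tokenizer(text, return_tensors="pt")
hidden = encoder(**enc).last_hidden_state   # (1, seq_len, hidden_size)
pooled = hidden.mean(dim=1)                 # simple pooling, an assumption
logits = classifier(pooled)                 # (1, NUM_RELATIONS)
print(logits.argmax(dim=-1))
```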