Matching the Blanks: Distributional Similarity for Relation Learning

General-purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general-purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris' distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task-agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar-based relation extraction (FewRel) even without using any of that task's training data. We also show that models initialized with our task-agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform previous methods on SemEval 2010 Task 8, KBP37, and TACRED.
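To make the abstract's "relation representations from entity-linked text" concrete, here is a minimal sketch of the paper's entity-marker scheme: each entity mention is wrapped in special marker tokens, and the hidden states at the two start markers are concatenated into a relation vector. It assumes the Hugging Face transformers library; the function name relation_representation and the example sentence are illustrative, not from the paper's code.

```python
# Hedged sketch: build a relation representation by concatenating BERT's
# hidden states at the [E1] and [E2] entity start markers.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# The paper wraps each entity mention in special marker tokens.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]}
)
model = BertModel.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))  # make room for the new markers
model.eval()

def relation_representation(text: str) -> torch.Tensor:
    """Encode a sentence with [E1]...[/E1] and [E2]...[/E2] markers and
    return the concatenation of the hidden states at the two start markers."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    ids = inputs["input_ids"][0].tolist()
    e1 = ids.index(tokenizer.convert_tokens_to_ids("[E1]"))
    e2 = ids.index(tokenizer.convert_tokens_to_ids("[E2]"))
    return torch.cat([hidden[e1], hidden[e2]])  # (1536,) relation vector

rep = relation_representation(
    "[E1] Marie Curie [/E1] was born in [E2] Warsaw [/E2] ."
)
```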

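The "matching the blanks" pretraining objective itself needs no relation labels: two entity-linked statements form a positive pair when they mention the same entity pair, and entity mentions are replaced by a [BLANK] token with some probability (0.7 in the paper) so the encoder cannot simply memorize the entities. A rough sketch of that pair construction, with illustrative function names:

```python
# Hedged sketch of "matching the blanks" training-pair construction.
# blank_entities and mtb_label are illustrative names, not the paper's code.
import random

ALPHA = 0.7  # probability of blanking an entity mention (value from the paper)

def blank_entities(tokens):
    """Replace the contents of each [E1]...[/E1] / [E2]...[/E2] span with a
    single [BLANK] token with probability ALPHA, keeping the markers."""
    out, closing = [], None
    for tok in tokens:
        if tok in ("[E1]", "[E2]"):
            out.append(tok)
            if random.random() < ALPHA:
                out.append("[BLANK]")
                closing = "[/E1]" if tok == "[E1]" else "[/E2]"
        elif closing:  # inside a blanked span: drop the entity tokens
            if tok == closing:
                out.append(tok)
                closing = None
        else:
            out.append(tok)
    return out

def mtb_label(entity_pair_a, entity_pair_b):
    """Binary target: 1 iff the two statements link the same (e1, e2) pair."""
    return 1 if entity_pair_a == entity_pair_b else 0
```

The encoder is then trained so that the similarity (dot product) of the two statements' relation representations predicts this binary label.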
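
Because the pretrained representations are compared by dot product, the zero-shot FewRel result mentioned in the abstract reduces to nearest-neighbor scoring against each task's exemplars. A minimal sketch (few_shot_predict is an illustrative name, not the paper's code):

```python
# Hedged sketch of exemplar-based (few-shot) prediction: assign the query
# the label of the exemplar with the highest dot-product similarity.
import torch

def few_shot_predict(query_rep: torch.Tensor, exemplar_reps: torch.Tensor) -> int:
    """query_rep: (dim,), exemplar_reps: (num_exemplars, dim).
    Returns the index of the best-matching exemplar."""
    scores = exemplar_reps @ query_rep  # dot-product similarity per exemplar
    return int(scores.argmax())
```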

Results from the Paper


Task                     Dataset              Model                              Metric             Value  Global Rank
Relation Extraction      SemEval-2010 Task 8  BERTEM+MTB                         F1                 89.5   #13
Relation Classification  TACRED               MTB (Baldini Soares et al., 2019)  F1                 71.5   #9
Relation Extraction      TACRED               BERTEM+MTB                         F1                 71.5   #16
                                                                                 F1 (1% Few-Shot)   43.4   #3
                                                                                 F1 (10% Few-Shot)  64.8   #3
