Revisiting the Negative Data of Distantly Supervised Relation Extraction

Distant supervision automatically generates plenty of training samples for relation extraction. However, it also incurs two major problems: noisy labels and imbalanced training data. Previous works focus more on reducing wrongly labeled relations (false positives), while few explore the missing relations caused by the incompleteness of the knowledge base (false negatives). Furthermore, the quantity of negative labels overwhelmingly surpasses the positive ones in previous problem formulations. In this paper, we first provide a thorough analysis of the above challenges caused by negative data. Next, we formulate the problem of relation extraction as a positive-unlabeled learning task to alleviate the false negative problem. Thirdly, we propose a pipeline approach, dubbed \textsc{ReRe}, that performs sentence-level relation detection and then subject/object extraction to achieve sample-efficient training. Experimental results show that the proposed method consistently outperforms existing approaches and maintains excellent performance even when trained with a large quantity of false positive samples.
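The positive-unlabeled formulation mentioned above can be illustrated with a standard non-negative PU risk estimator (Kiryo et al., 2017). This is a generic sketch of PU learning, not necessarily the exact loss used by ReRe; the class prior and surrogate loss below are illustrative assumptions.

```python
from math import exp


def sigmoid_loss(score, label):
    # Surrogate loss l(z, y) = 1 / (1 + exp(y * z)); small when y * z is large.
    return 1.0 / (1.0 + exp(label * score))


def nn_pu_risk(pos_scores, unl_scores, prior):
    """Non-negative PU risk (generic, not the paper's exact loss):

        R = pi * R_p^+ + max(0, R_u^- - pi * R_p^-)

    prior (pi) is the assumed probability that a random candidate is positive;
    unlabeled examples stand in for the missing explicit negatives.
    """
    r_p_pos = sum(sigmoid_loss(s, +1) for s in pos_scores) / len(pos_scores)
    r_p_neg = sum(sigmoid_loss(s, -1) for s in pos_scores) / len(pos_scores)
    r_u_neg = sum(sigmoid_loss(s, -1) for s in unl_scores) / len(unl_scores)
    # The max(0, ...) clamp prevents the negative-risk term from going below
    # zero, which is what distinguishes nnPU from the unbiased PU estimator.
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)
```

The clamp is what makes this estimator usable with flexible models: without it, an expressive classifier can drive the estimated negative risk below zero and overfit the unlabeled set.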

ACL 2021 (PDF, Abstract)
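The detect-then-extract pipeline described in the abstract can be sketched as two chained stages. The keyword rules and relation names below are toy stand-ins for illustration only; the paper uses trained sentence-level classifiers, not these rules.

```python
def detect_relations(sentence):
    # Stage 1: sentence-level relation detection.
    # Toy keyword cues stand in for a trained relation classifier.
    cues = {
        "founded": "founder_of",
        "born in": "place_of_birth",
    }
    return [rel for cue, rel in cues.items() if cue in sentence]


def extract_subject_object(sentence, relation):
    # Stage 2: subject/object extraction, conditioned on the detected
    # relation. Toy string splitting stands in for a trained span extractor.
    if relation == "founder_of":
        subj, _, obj = sentence.partition(" founded ")
        return subj.strip(), obj.strip(". ")
    if relation == "place_of_birth":
        subj, _, obj = sentence.partition(" was born in ")
        return subj.strip(), obj.strip(". ")
    return None


def extract_triples(sentence):
    # Chain the two stages: only sentences that pass relation detection
    # reach the (more expensive) span-extraction step.
    triples = []
    for rel in detect_relations(sentence):
        pair = extract_subject_object(sentence, rel)
        if pair:
            triples.append((pair[0], rel, pair[1]))
    return triples


print(extract_triples("Steve Jobs founded Apple."))
# [('Steve Jobs', 'founder_of', 'Apple')]
```

Filtering sentences by detected relation first is what makes the pipeline sample-efficient: the extractor only sees sentences that plausibly express some relation.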


Results from the Paper

Task                 Dataset    Model             Metric  Value  Global Rank
Relation Extraction  NYT10-HRL  ReRe              F1      73.95  # 1
Relation Extraction  NYT10-HRL  ReRe (exact)      F1      73.4   # 2
Relation Extraction  NYT11-HRL  ReRe              F1      56.23  # 1
Relation Extraction  NYT11-HRL  ReRe (exact)      F1      55.47  # 3
Relation Extraction  NYT21      ReRe              F1      59.62  # 1
Relation Extraction  NYT21      ReRe (exact)      F1      58.88  # 2
Relation Extraction  NYT21      TPLinker (exact)  F1      57.33  # 3
Relation Extraction  NYT21      CasRel (exact)    F1      54.78  # 4
Relation Extraction  SKE        ReRe (exact)      F1      87.21  # 1
Relation Extraction  SKE        CasRel (exact)    F1      86.45  # 2
Relation Extraction  SKE        TPLinker (exact)  F1      84.32  # 3
