Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model

ACL 2021 · Hongliang Dai, Yangqiu Song, Haixun Wang

Recently, there has been an effort to extend fine-grained entity typing by using a richer, ultra-fine set of types, and by labeling noun phrases, including pronouns and nominal nouns, instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human-annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context-dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.
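The core idea sketched in the abstract is to use the MLM as a weak annotator: insert a hypernym-eliciting pattern next to the mention and read the MLM's top predictions for the masked slot as candidate type labels. Below is a minimal, illustrative sketch (not the authors' released code), assuming a Hearst-style prompt of the form "{mention} and any other [MASK]" and the Hugging Face transformers BERT MLM; the function name and example sentence are hypothetical.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def hypernym_candidates(sentence: str, mention: str, top_k: int = 10):
    """Return the MLM's top-k predictions for a hypernym slot next to the mention."""
    # Insert a Hearst-style pattern after the mention so the MLM fills the
    # [MASK] position with context-dependent hypernyms of the mention.
    prompted = sentence.replace(mention, f"{mention} and any other [MASK]", 1)
    inputs = tokenizer(prompted, return_tensors="pt")

    # Locate the [MASK] token and score the vocabulary at that position.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits

    top_ids = logits[0, mask_pos].topk(top_k, dim=-1).indices[0]
    return tokenizer.convert_ids_to_tokens(top_ids.tolist())

print(hypernym_candidates(
    "In late 2015, Leonardo DiCaprio starred in The Revenant.",
    "Leonardo DiCaprio",
))
```

The full approach in the paper goes beyond this single prompt (for example, using additional patterns and denoising the generated labels before training), which this sketch omits.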


Results from the Paper


Entity Typing on Ontonotes v5 (English), MLMET (uses extra training data):
  F1: 49.1 (global rank #1)
  Precision: 53.6 (global rank #1)
  Recall: 45.3 (global rank #1)

Entity Typing on Open Entity, MLMET:
  F1: 49.1 (global rank #6)

Methods