LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at
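
The entity-aware self-attention mentioned in the abstract distinguishes token types when computing attention scores; in the paper this is realized by using a separate query projection for each token-type pair (word-to-word, word-to-entity, entity-to-word, entity-to-entity), while keys and values are shared. Below is a minimal, unbatched, single-head sketch of that idea in PyTorch. It is not the authors' implementation; the class and argument names (`EntityAwareSelfAttention`, `is_entity`) and the tensor shapes are illustrative assumptions.

```python
# Sketch of entity-aware self-attention: one query matrix per token-type pair.
# Single head, no batch dimension, for clarity. Not the authors' code.
import math
import torch
import torch.nn as nn


class EntityAwareSelfAttention(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # Separate query projections for each (query type, key type) pair;
        # keys and values are shared across token types.
        self.q_w2w = nn.Linear(hidden_size, hidden_size)
        self.q_w2e = nn.Linear(hidden_size, hidden_size)
        self.q_e2w = nn.Linear(hidden_size, hidden_size)
        self.q_e2e = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        self.hidden_size = hidden_size

    def forward(self, hidden: torch.Tensor, is_entity: torch.Tensor) -> torch.Tensor:
        # hidden: (seq_len, hidden_size); is_entity: (seq_len,) bool mask,
        # True where the token is an entity, False where it is a word.
        k = self.key(hidden)
        v = self.value(hidden)

        # Compute all four query variants, then select the appropriate score
        # for each (query token, key token) pair based on their types.
        q_w2w, q_w2e = self.q_w2w(hidden), self.q_w2e(hidden)
        q_e2w, q_e2e = self.q_e2w(hidden), self.q_e2e(hidden)

        scores_word_query = torch.where(is_entity[None, :],   # is the key an entity?
                                        q_w2e @ k.T, q_w2w @ k.T)
        scores_entity_query = torch.where(is_entity[None, :],
                                          q_e2e @ k.T, q_e2w @ k.T)
        scores = torch.where(is_entity[:, None],               # is the query an entity?
                             scores_entity_query, scores_word_query)
        scores = scores / math.sqrt(self.hidden_size)

        attn = torch.softmax(scores, dim=-1)
        return attn @ v
```

Splitting only the query projections (while sharing keys and values) keeps the extra parameter cost small, yet lets the model score word-entity interactions differently from word-word or entity-entity ones.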

Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Named Entity Recognition | CoNLL 2003 (English) | LUKE | F1 | 93.91 | # 6 |
| Question Answering | SQuAD1.1 | LUKE (single model) | EM | 90.202 | # 2 |
| Question Answering | SQuAD1.1 | LUKE (single model) | F1 | 95.379 | # 3 |
| Question Answering | SQuAD1.1 | LUKE | EM | 90.2 | # 4 |
| Question Answering | SQuAD1.1 | LUKE | F1 | 95.4 | # 2 |
| Question Answering | SQuAD1.1 | LUKE | Hardware Burden | None | # 1 |
| Question Answering | SQuAD1.1 | LUKE | Operations per network pass | None | # 1 |
| Question Answering | SQuAD1.1 dev | LUKE | EM | 89.8 | # 2 |
| Question Answering | SQuAD1.1 dev | LUKE | F1 | 95 | # 4 |
| Question Answering | SQuAD2.0 | LUKE (single model) | EM | 87.429 | # 83 |
| Question Answering | SQuAD2.0 | LUKE (single model) | F1 | 90.163 | # 83 |
| Relation Extraction | TACRED | LUKE | F1 | 72.7 | # 11 |
| Relation Extraction | TACRED | LUKE | F1 (1% Few-Shot) | 17.0 | # 4 |
| Relation Extraction | TACRED | LUKE | F1 (5% Few-Shot) | 51.6 | # 3 |
| Relation Extraction | TACRED | LUKE | F1 (10% Few-Shot) | 60.6 | # 4 |
| Question Answering | TACRED | LUKE | Relation F1 | 72.7 | # 1 |