Deep contextualized word representations

NAACL-HLT 2018 · Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
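The "learned functions of the internal states" are a task-specific scalar mix of the biLM's layer activations, ELMo_k = γ · Σ_j softmax(s)_j · h_{k,j} (Eq. 1 of the paper). Below is a minimal PyTorch sketch of that weighting; the class name `ScalarMix`, the layer count, tensor shapes, and initialization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighted sum of biLM layers:
    ELMo_k = gamma * sum_j softmax(s)_j * h_{k,j} (Eq. 1 of the paper).
    Shapes and initialization here are illustrative assumptions."""

    def __init__(self, num_layers: int):
        super().__init__()
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))  # s_j, softmax-normalized below
        self.gamma = nn.Parameter(torch.ones(1))                     # task-specific scale gamma

    def forward(self, layer_activations: torch.Tensor) -> torch.Tensor:
        # layer_activations: (num_layers, batch, seq_len, dim) stack of biLM hidden states
        weights = torch.softmax(self.scalar_weights, dim=0)
        mixed = (weights.view(-1, 1, 1, 1) * layer_activations).sum(dim=0)
        return self.gamma * mixed

# Toy usage: a 3-layer biLM (token embedding + 2 biLSTM layers) with 1024-dim states.
layers = torch.randn(3, 2, 7, 1024)            # random stand-in for biLM activations
elmo_vectors = ScalarMix(num_layers=3)(layers)  # -> (2, 7, 1024)
```

The softmax-normalized weights let each downstream task choose which biLM layers to emphasize; the paper's analysis finds that lower layers capture more syntactic information while higher layers capture more semantic, context-dependent information, which is why exposing all layers rather than only the top one matters.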

Evaluation results from the paper


| Task | Dataset | Model | Metric name | Metric value | Global rank |
| --- | --- | --- | --- | --- | --- |
| Citation Intent Classification | ACL-ARC | BiLSTM-Attention + ELMo | F1 | 54.6 | #3 |
| Named Entity Recognition (NER) | CoNLL 2003 (English) | BiLSTM-CRF + ELMo | F1 | 92.22 | #7 |
| Coreference Resolution | CoNLL 2012 | (Lee et al., 2017) + ELMo | Avg F1 | 70.4 | #2 |
| Semantic Role Labeling | OntoNotes | (He et al., 2017) + ELMo | F1 | 84.6 | #5 |
| Natural Language Inference | SNLI | ESIM + ELMo | % Test Accuracy | 88.7 | #11 |
| Natural Language Inference | SNLI | ESIM + ELMo | % Train Accuracy | 91.6 | #24 |
| Natural Language Inference | SNLI | ESIM + ELMo | Parameters | 8.0m | #1 |
| Natural Language Inference | SNLI | ESIM + ELMo Ensemble | % Test Accuracy | 89.3 | #7 |
| Natural Language Inference | SNLI | ESIM + ELMo Ensemble | % Train Accuracy | 92.1 | #22 |
| Natural Language Inference | SNLI | ESIM + ELMo Ensemble | Parameters | 40m | #1 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (ensemble) | EM | 81.003 | #22 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (ensemble) | F1 | 87.432 | #24 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (single model) | EM | 78.580 | #41 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (single model) | F1 | 85.833 | #41 |
| Question Answering | SQuAD2.0 | BiDAF + Self Attention + ELMo (single model) | EM | 63.372 | #78 |
| Question Answering | SQuAD2.0 | BiDAF + Self Attention + ELMo (single model) | F1 | 66.251 | #84 |
| Sentiment Analysis | SST-5 Fine-grained classification | BCN + ELMo | Accuracy | 54.7 | #3 |