Deep contextualized word representations

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
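The representations described above (ELMo) are formed as a task-specific weighted combination of the biLM's internal layer states. Below is a minimal, hypothetical PyTorch sketch of that "scalar mix" idea, not the authors' released code: the class name ScalarMix and the random layer activations are illustrative stand-ins, with softmax-normalised layer weights and a learned scale corresponding to the s_j and gamma parameters described in the paper.

import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighted sum of biLM layer activations (illustrative sketch)."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One scalar weight per biLM layer (softmax-normalised in forward),
        # plus a single task-specific scaling factor.
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (num_layers, batch, seq_len, dim) activations from a
        # pre-trained biLM (random stand-ins in the toy usage below).
        weights = torch.softmax(self.scalar_weights, dim=0)
        mixed = (weights.view(-1, 1, 1, 1) * layer_states).sum(dim=0)
        return self.gamma * mixed

# Toy usage: 3 biLM layers (token layer + 2 biLSTM layers), batch of 2,
# sequences of length 5, 1024-dimensional states.
states = torch.randn(3, 2, 5, 1024)
elmo_vectors = ScalarMix(num_layers=3)(states)
print(elmo_vectors.shape)  # torch.Size([2, 5, 1024])

In the downstream models reported in the results below, a vector of this form is typically concatenated with the task model's existing token representation, which is what allows ELMo to be "easily added to existing models".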

Published at NAACL 2018.

Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data
--- | --- | --- | --- | --- | --- | ---
Citation Intent Classification | ACL-ARC | BiLSTM-Attention + ELMo | F1 | 54.6 | #3 | Yes
Named Entity Recognition (NER) | CoNLL++ | BiLSTM-CRF + ELMo | F1 | 93.42 | #6 |
Named Entity Recognition (NER) | CoNLL 2003 (English) | BiLSTM-CRF + ELMo | F1 | 92.22 | #42 |
Semantic Role Labeling | OntoNotes | He et al., 2017 + ELMo | F1 | 84.6 | #12 |
Coreference Resolution | OntoNotes | e2e-coref + ELMo | F1 | 70.4 | #16 |
Conversational Response Selection | PolyAI Reddit | ELMo | 1-of-100 Accuracy | 19.3% | #5 |
Natural Language Inference | SNLI | ESIM + ELMo | % Test Accuracy | 88.7 | #29 |
Natural Language Inference | SNLI | ESIM + ELMo | % Train Accuracy | 91.6 | #34 |