Deep contextualized word representations

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
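The core mechanism described above is a learned, task-specific weighted combination of the biLM's layer activations. Below is a minimal sketch of that layer mixing (softmax-normalized per-layer weights plus a global scale gamma, i.e. ELMo_k = gamma * sum_j s_j * h_{k,j}); it is an illustrative assumption, not the authors' released implementation, and the layer count, dimensions, and class name are placeholders.

```python
import torch
import torch.nn as nn


class ScalarMix(nn.Module):
    """Learned weighted average over the layers of a pre-trained biLM (illustrative sketch)."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One unnormalized weight per biLM layer, plus a global scale gamma.
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_activations: torch.Tensor) -> torch.Tensor:
        # layer_activations: (num_layers, batch, seq_len, dim)
        s = torch.softmax(self.scalar_weights, dim=0)            # normalized layer weights s_j
        mixed = (s.view(-1, 1, 1, 1) * layer_activations).sum(0)  # weighted sum over layers
        return self.gamma * mixed                                  # (batch, seq_len, dim)


if __name__ == "__main__":
    # Toy example: 3 biLM layers, batch of 2 sentences, 5 tokens, 1024-dim hidden states.
    fake_bilm_states = torch.randn(3, 2, 5, 1024)
    mix = ScalarMix(num_layers=3)
    elmo_vectors = mix(fake_bilm_states)  # these vectors are what a downstream model consumes
    print(elmo_vectors.shape)             # torch.Size([2, 5, 1024])
```

Because the mixing weights are trained with the downstream task, each task can emphasize different biLM layers (e.g., lower layers for syntax-heavy tasks, higher layers for more semantic ones), which is what makes exposing all of the network's internal states useful.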


Results from the Paper


Ranked #3 on Task 1 (Grouping) of the Only Connect Walls (OCW) dataset, using the Wasserstein Distance (WD) metric and extra training data.

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Citation Intent Classification | ACL-ARC | BiLSTM-Attention + ELMo | F1 | 54.6 | #4 |
| Named Entity Recognition (NER) | CoNLL++ | BiLSTM-CRF+ELMo | F1 | 93.42 | #6 |
| Named Entity Recognition (NER) | CoNLL 2003 (English) | BiLSTM-CRF+ELMo | F1 | 92.22 | #44 |