Learning to Compute Word Embeddings On the Fly

Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the "long tail" of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results against baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling.
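For intuition, here is a minimal sketch of the idea in PyTorch. Everything concrete below (the module name OnTheFlyEmbedding, the BiLSTM spelling encoder, mean-pooled definition encoding, layer sizes, and the additive combination) is an illustrative assumption, not the paper's exact architecture: the point is only that a rare word's embedding is predicted from auxiliary data (its spelling and, optionally, a dictionary definition) by small networks trained jointly with the downstream model.

```python
import torch
import torch.nn as nn

class OnTheFlyEmbedding(nn.Module):
    """Sketch of on-the-fly embedding prediction for rare words.

    A conventional embedding table is reliable only for frequent words;
    for rare words, auxiliary encoders over the word's spelling and its
    dictionary definition supply the signal. All design choices here are
    assumptions for illustration, not the paper's exact setup.
    """

    def __init__(self, vocab_size, char_vocab_size, dim=300):
        super().__init__()
        # Directly trained word embeddings (poor for rare words).
        self.word_emb = nn.Embedding(vocab_size, dim)
        # Character-level "spelling" encoder (BiLSTM is an assumption).
        self.char_emb = nn.Embedding(char_vocab_size, 32)
        self.char_rnn = nn.LSTM(32, dim // 2, bidirectional=True,
                                batch_first=True)

    def embed_definition(self, definition_ids):
        # Encode a dictionary definition by mean-pooling the embeddings
        # of its (mostly frequent) tokens: (def_len,) -> (dim,)
        return self.word_emb(definition_ids).mean(dim=0)

    def embed_spelling(self, char_ids):
        # Encode the word's character sequence: (word_len,) -> (dim,)
        _, (h, _) = self.char_rnn(self.char_emb(char_ids).unsqueeze(0))
        return torch.cat([h[0, 0], h[1, 0]], dim=-1)

    def forward(self, word_id, char_ids, definition_ids=None):
        # Combine the direct embedding with the predicted ones;
        # summation is one simple combination choice.
        e = self.word_emb(word_id) + self.embed_spelling(char_ids)
        if definition_ids is not None:  # "dict" variant
            e = e + self.embed_definition(definition_ids)
        return e

# Usage with toy, made-up ids:
otf = OnTheFlyEmbedding(vocab_size=50_000, char_vocab_size=100)
e = otf(torch.tensor(7),              # word id
        torch.tensor([3, 1, 4]),      # character ids (spelling)
        torch.tensor([2, 5, 9, 11]))  # definition token ids
print(e.shape)  # torch.Size([300])
```

Because the whole module is differentiable, the encoders are trained end-to-end by the downstream task loss: frequent words lean on the directly trained table, while rare words draw most of their representation from the spelling and definition encoders, which generalize from auxiliary data rather than from scarce task occurrences.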


Results from the Paper


Task                Dataset       Model                         Metric  Value   Global Rank
Question Answering  SQuAD1.1      OTF dict+spelling (single)    EM      64.083  #178
                                                                F1      73.056  #182
Question Answering  SQuAD1.1      OTF spelling (single)         EM      62.897  #180
                                                                F1      72.016  #183
Question Answering  SQuAD1.1      OTF spelling+lemma (single)   EM      62.604  #181
                                                                F1      71.968  #184
Question Answering  SQuAD1.1 dev  OTF dict+spelling (single)    EM      63.06   #40

Methods


No methods listed for this paper.