Universal Language Model Fine-tuning for Text Classification

ACL 2018 · Jeremy Howard, Sebastian Ruder

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code.
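Among the fine-tuning techniques the paper introduces is the slanted triangular learning rate (STLR) schedule: the learning rate rises linearly for a short warm-up fraction of training, then decays linearly. Below is a minimal sketch of that schedule as a standalone function, using the paper's default hyperparameters (cut_frac=0.1, ratio=32); the function name and exact argument layout are our own, not from the released code.

```python
def stlr(t, T, lr_max=0.01, cut_frac=0.1, ratio=32):
    """Slanted triangular learning rate at iteration t of T total iterations.

    Rises linearly from lr_max/ratio to lr_max over the first cut_frac of
    training, then decays linearly back to lr_max/ratio.
    """
    cut = int(T * cut_frac)          # iteration at which the schedule peaks
    if t < cut:
        p = t / cut                  # increasing phase: fraction of warm-up done
    else:
        # decreasing phase: fraction of the peak remaining
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return lr_max * (1 + p * (ratio - 1)) / ratio


# The rate peaks at lr_max at the cut point, e.g. iteration 10 of 100:
peak = stlr(10, 100)     # == lr_max
start = stlr(0, 100)     # == lr_max / ratio
```

In a training loop, the returned value would be assigned to the optimizer's learning rate before each step; the short warm-up lets the classifier head adapt quickly, while the long decay refines the pretrained weights gently.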


Results from the Paper


| Task                | Dataset                          | Model  | Metric   | Value | Global Rank |
|---------------------|----------------------------------|--------|----------|-------|-------------|
| Text Classification | AG News                          | ULMFiT | Error    | 5.01  | #4          |
| Text Classification | DBpedia                          | ULMFiT | Error    | 0.80  | #6          |
| Sentiment Analysis  | IMDb                             | ULMFiT | Accuracy | 95.4  | #10         |
| Text Classification | TREC-6                           | ULMFiT | Error    | 3.6   | #3          |
| Sentiment Analysis  | Yelp Binary classification       | ULMFiT | Error    | 2.16  | #7          |
| Sentiment Analysis  | Yelp Fine-grained classification | ULMFiT | Error    | 29.98 | #5          |

Methods