Language Models as Knowledge Bases?

IJCNLP 2019 · Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel

Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements...
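As a minimal sketch of what such cloze-style probing looks like in practice, the snippet below queries an off-the-shelf masked language model with a fill-in-the-blank statement using the Hugging Face `transformers` library. This is an illustrative assumption, not the paper's own LAMA probe code; the example fact and model choice are likewise assumptions.

```python
# Sketch: probing a pretrained masked LM with a cloze statement (assumed example,
# not the paper's LAMA implementation).
from transformers import pipeline

# Load a pretrained BERT model with its masked-LM head; no fine-tuning is applied.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Express a relational fact as a "fill-in-the-blank" cloze statement.
query = "The capital of France is [MASK]."

# Rank the model's candidate tokens for the blank; the top prediction is read off
# as the model's "answer" to the relational query.
for prediction in fill_mask(query, top_k=5):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```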

