Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model

ICLR 2020 · Wenhan Xiong, Jingfei Du, William Yang Wang, Veselin Stoyanov

Recent breakthroughs of pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method to capture knowledge…
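To make the idea of language models capturing knowledge concrete, a cloze-style probe can query a pretrained masked language model for a real-world fact. The following is a minimal sketch, not from the paper; it assumes the Hugging Face `transformers` library, and the model choice (`bert-base-uncased`) and the example query are illustrative assumptions.

```python
# Sketch: probing a pretrained masked LM for factual knowledge
# via a cloze-style query. Not the paper's method; for illustration only.
from transformers import pipeline

# Any pretrained masked LM works here; bert-base-uncased is an assumption.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The [MASK] token stands in for a fact the model may have absorbed
# implicitly during large-scale pretraining.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")
```

If pretraining has encoded the fact, the correct entity should rank near the top of the predicted tokens; probes of this kind are a common way to test how much knowledge a language model holds without any fine-tuning.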
