XLNet: Generalized Autoregressive Pretraining for Language Understanding

NeurIPS 2019 · Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, Quoc V. Le

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy...
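The contrast the abstract draws can be sketched in a toy example (this is an illustration of the two factorization styles, not the paper's implementation): BERT-style denoising predicts each masked token independently given the same corrupted context, whereas a permutation-based autoregressive factorization lets later predictions condition on earlier ones.

```python
import random

# Toy token sequence; names and positions here are illustrative only.
tokens = ["New", "York", "is", "a", "city"]

random.seed(0)

# BERT-style: mask positions and predict each masked token
# independently, conditioning only on the unmasked context.
masked_positions = [0, 1]  # suppose "New" and "York" are masked
bert_contexts = {
    pos: [t if i not in masked_positions else "[MASK]"
          for i, t in enumerate(tokens)]
    for pos in masked_positions
}
# Both predictions see the identical corrupted context, so the
# dependency between the masked tokens ("New" -> "York") is not modeled.

# Permutation-based autoregressive factorization (the XLNet idea):
# pick a random factorization order over positions, then predict each
# token conditioned on the tokens that came earlier in that order.
order = list(range(len(tokens)))
random.shuffle(order)
xlnet_contexts = {}
seen = []
for pos in order:
    xlnet_contexts[pos] = [tokens[i] for i in sorted(seen)]
    seen.append(pos)
# Each successive prediction conditions on one more token, so if
# "New" precedes "York" in the order, predicting "York" can use "New".
```

Under the masking scheme, the two contexts in `bert_contexts` are identical; under the permutation scheme, the context sizes grow 0, 1, 2, ... along the factorization order, which is exactly the dependency structure the abstract says masking discards.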

