Language Models

BERT, or Bidirectional Encoder Representations from Transformers, improves upon standard Transformers by removing the unidirectionality constraint, using a masked language model (MLM) pre-training objective. The masked language model randomly masks some of the tokens in the input, and the objective is to predict the original vocabulary id of each masked token based only on its context. Unlike left-to-right language model pre-training, the MLM objective lets the representation fuse the left and the right context, which makes it possible to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a next sentence prediction (NSP) task that jointly pre-trains text-pair representations.
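To make the MLM objective concrete, the sketch below applies BERT-style masking to a sequence of token ids: roughly 15% of positions are selected for prediction and, following the original paper, a selected token is replaced by [MASK] 80% of the time, by a random token 10% of the time, and left unchanged 10% of the time. The function name, the [MASK] id of 103 (its value in the bert-base-uncased WordPiece vocabulary), and the -100 ignore label (PyTorch's cross-entropy convention) are illustrative assumptions, not part of the source text.

```python
import random

MASK_TOKEN_ID = 103  # [MASK] id in the bert-base-uncased vocabulary (assumed here)

def mask_tokens(token_ids, vocab_size, mask_prob=0.15):
    """Apply BERT-style MLM masking to a list of token ids.

    Returns (input_ids, labels), where labels holds -100 at positions that
    should be ignored by the loss and the original token id at masked positions.
    """
    input_ids = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:        # select ~15% of positions for prediction
            labels[i] = tok                    # the model must recover the original id here
            r = random.random()
            if r < 0.8:                        # 80%: replace with [MASK]
                input_ids[i] = MASK_TOKEN_ID
            elif r < 0.9:                      # 10%: replace with a random vocabulary token
                input_ids[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token unchanged
    return input_ids, labels
```

During pre-training, the resulting (input_ids, labels) pair feeds a cross-entropy loss over the vocabulary, computed only at the selected positions.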

There are two steps in BERT: pre-training and fine-tuning. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they are initialized with the same pre-trained parameters.
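The fine-tuning step can be sketched as follows using the Hugging Face transformers library; the library, the checkpoint name, the toy sentiment labels, and the learning rate are assumptions for illustration only, since the paper does not prescribe a particular implementation.

```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

# Initialize from pre-trained parameters (assumed checkpoint), then fine-tune
# all parameters on labeled data from a hypothetical downstream task.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["great movie", "terrible plot"]   # toy sentiment examples
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                          # a few fine-tuning steps on the toy batch
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

A separate copy of the model would be fine-tuned this way for each downstream task, all starting from the same pre-trained parameters.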

Source: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Tasks

Task                      Papers    Share
Language Modelling           110   12.39%
Retrieval                     86    9.68%
Question Answering            50    5.63%
Text Classification           38    4.28%
Sentence                      35    3.94%
Large Language Model          35    3.94%
Sentiment Analysis            33    3.72%
NER                           20    2.25%
Classification                18    2.03%
