GPT-2 is a Transformer architecture that was notable for its size (1.5 billion parameters) at the time of its release. The model is pretrained on WebText, a dataset of text scraped from 45 million web links. It largely follows the previous GPT architecture, with some modifications:
- Layer normalization is moved to the input of each sub-block, similar to a pre-activation residual network, and an additional layer normalization is added after the final self-attention block.
- A modified initialization accounts for the accumulation on the residual path with model depth: the weights of residual layers are scaled at initialization by a factor of $1/\sqrt{N}$, where $N$ is the number of residual layers (see the sketch after this list).
- The vocabulary is expanded to 50,257 tokens, the context size is expanded from 512 to 1024 tokens, and a larger batch size of 512 is used.
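To make the layer-normalization placement and the residual-weight scaling concrete, below is a minimal PyTorch sketch. The module and helper names (`PreLNBlock`, `scale_residual_projections`) and the layer sizes are illustrative assumptions rather than the released GPT-2 code, and causal masking plus token/position embeddings are omitted for brevity.

```python
# Minimal sketch of the GPT-2 modifications described above:
# pre-sub-block layer normalization and 1/sqrt(N) scaling of residual
# projection weights at initialization. Hyperparameters are illustrative.
import math
import torch
import torch.nn as nn


class PreLNBlock(nn.Module):
    """Transformer block with LayerNorm moved to the input of each sub-block."""

    def __init__(self, n_embd: int, n_head: int):
        super().__init__()
        self.ln_1 = nn.LayerNorm(n_embd)
        self.attn = nn.MultiheadAttention(n_embd, n_head, batch_first=True)
        self.ln_2 = nn.LayerNorm(n_embd)
        self.mlp = nn.Sequential(
            nn.Linear(n_embd, 4 * n_embd),
            nn.GELU(),
            nn.Linear(4 * n_embd, n_embd),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # LayerNorm is applied *before* each sub-block (pre-activation style),
        # so the residual path itself is never normalized.
        h = self.ln_1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        x = x + self.mlp(self.ln_2(x))
        return x


def scale_residual_projections(model: nn.Module, n_layer: int) -> None:
    """Scale the weights of projections feeding the residual path by 1/sqrt(N).

    N is taken here as the number of transformer blocks; the paper's
    "number of residual layers" could also be read as twice that, since
    each block adds two residual branches.
    """
    for block in model.modules():
        if isinstance(block, PreLNBlock):
            block.attn.out_proj.weight.data.mul_(1.0 / math.sqrt(n_layer))
            block.mlp[2].weight.data.mul_(1.0 / math.sqrt(n_layer))


# Illustrative configuration reflecting the expanded vocabulary and context size.
vocab_size, block_size, n_layer, n_embd, n_head = 50257, 1024, 12, 768, 12
blocks = nn.Sequential(*[PreLNBlock(n_embd, n_head) for _ in range(n_layer)])
scale_residual_projections(blocks, n_layer)

x = torch.randn(2, block_size, n_embd)  # (batch, sequence, embedding)
y = blocks(x)
```

Moving layer normalization to the sub-block input keeps the residual path an identity map, which stabilizes training as depth grows; the $1/\sqrt{N}$ scaling compensates for the variance that accumulates as $N$ residual branches are summed into that path.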
Task | Papers | Share |
---|---|---|
Language Modelling | 162 | 22.04% |
Text Generation | 101 | 13.74% |
Question Answering | 27 | 3.67% |
Retrieval | 13 | 1.77% |
Response Generation | 13 | 1.77% |
Dialogue Generation | 12 | 1.63% |
Machine Translation | 11 | 1.50% |
Reinforcement Learning (RL) | 10 | 1.36% |
Natural Language Understanding | 10 | 1.36% |