GPT is a Transformer-based architecture and training procedure for natural language processing tasks. Training follows a two-stage procedure: first, a language modeling objective is used on unlabeled data to learn the initial parameters of a neural network model; these parameters are then adapted to a target task using the corresponding supervised objective.

Source: Improving Language Understanding by Generative Pre-Training
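The two-stage procedure above can be sketched in miniature with numpy. This is an illustrative toy, not the paper's model: a bigram predictor stands in for the Transformer, the corpus and the animal/non-animal labels are hypothetical, and the point is only the flow of parameters — the embeddings `E` learned under the language modeling objective in stage 1 are reused and further updated under a supervised objective in stage 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unlabeled corpus and vocabulary, for illustration only.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
ids = np.array([stoi[w] for w in corpus])
V, D = len(vocab), 8

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stage 1: language modeling objective on unlabeled text.
# A bigram model p(next | current) stands in for the Transformer.
E = rng.normal(0, 0.1, (V, D))   # token embeddings (the shared parameters)
W = rng.normal(0, 0.1, (D, V))   # LM output projection
lr = 0.5
for _ in range(200):
    x, y = ids[:-1], ids[1:]
    h = E[x]                      # (N, D) hidden states
    p = softmax(h @ W)            # (N, V) next-token distribution
    g = p.copy()                  # cross-entropy gradient w.r.t. logits
    g[np.arange(len(y)), y] -= 1
    g /= len(y)
    W -= lr * (h.T @ g)
    np.add.at(E, x, -lr * (g @ W.T))  # scatter-add gradient into embeddings

lm_loss = -np.log(p[np.arange(len(y)), y]).mean()

# Stage 2: adapt the pretrained parameters with a supervised objective.
# Hypothetical labels: 1 if the word names an animal, else 0.
labels = np.array([1 if w in {"cat", "dog"} else 0 for w in vocab])
C = rng.normal(0, 0.1, (D, 2))   # new task-specific head
for _ in range(200):
    h = E                          # reuse pretrained embeddings as features
    q = softmax(h @ C)
    g = q.copy()
    g[np.arange(V), labels] -= 1
    g /= V
    C -= lr * (h.T @ g)
    E -= lr * (g @ C.T)            # fine-tune the shared parameters too

acc = (softmax(E @ C).argmax(axis=1) == labels).mean()
```

The design point mirrored here is that stage 2 adds only a small task head (`C`) while the bulk of the parameters (`E`) are initialized from the language modeling stage rather than from scratch.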

Task                  Papers   Share
Language Modelling        94  11.60%
Large Language Model      44   5.43%
Question Answering        36   4.44%
Text Generation           29   3.58%
Retrieval                 22   2.72%
Decision Making           21   2.59%
Prompt Engineering        20   2.47%
Sentence                  18   2.22%
In-Context Learning       18   2.22%