We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.
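The pass@k metric behind these numbers counts a problem as solved if any of k sampled programs passes all of its unit tests; because computing it with exactly k samples is high-variance, the paper estimates it from n >= k samples per problem with an unbiased estimator. A minimal sketch of that estimator, in the numerically stable product form the paper reports:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for a single problem.

    n: total samples drawn for the problem
    c: number of samples that pass all unit tests
    k: the k in pass@k
    Computes 1 - C(n-c, k) / C(n, k) as a stable running product.
    """
    if n - c < k:
        return 1.0  # fewer than k failures: every k-subset contains a pass
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```

Averaging this over all problems gives the benchmark score; for example, pass_at_k(200, 10, 1) reduces to c / n = 0.05.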


Datasets

Introduced in the Paper: HumanEval
Used in the Paper: BIG-bench, BBH, APPS
Code Generation on APPS (model: Codex 12B (Raw))

Split          Pass@1         Pass@5         Pass@1000       Pass@any
Introductory   4.14%  (# 8)   9.65%  (# 3)   25.02%  (# 4)   25.02%  (# 5)
Interview      0.14%  (# 11)  0.51%  (# 7)    3.70%  (# 6)    3.70%  (# 7)
Competition    0.02%  (# 8)   0.09%  (# 4)    3.23%  (# 6)    3.32%  (# 7)

Global rank on the corresponding APPS leaderboard is given in parentheses.

Multi-task Language Understanding (model: code-davinci-002 175B (CoT))

Benchmark   Average (%)   Global Rank
BBH-alg     73.9          # 1
BBH-nlp     73.5          # 3
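The Pass@k entries above build on a per-sample notion of functional correctness: a generated program counts only if it passes every unit test for its problem. A minimal sketch of that check, assuming the problem-record fields (prompt, test, entry_point) of the released HumanEval data; a real harness executes this in a sandboxed subprocess with a timeout, since model-generated code is untrusted:

```python
def check_correctness(problem: dict, completion: str) -> bool:
    """Return True iff the completion passes all of the problem's unit tests.

    Assumes HumanEval-style records: `prompt` holds the function signature and
    docstring, `test` defines a check(candidate) function, and `entry_point`
    names the function under test.
    """
    program = (
        problem["prompt"]                         # signature + docstring
        + completion                              # model-generated body
        + "\n" + problem["test"]                  # hidden unit tests
        + f"\ncheck({problem['entry_point']})\n"  # run them on the candidate
    )
    try:
        exec(program, {})  # sandbox and time-limit this call in practice
        return True
    except Exception:
        return False
```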
