Search Results for author: Tamay Besiroglu

Found 10 papers, 3 papers with code

Algorithmic progress in language models

1 code implementation • 9 Mar 2024 • Anson Ho, Tamay Besiroglu, Ege Erdil, David Owen, Robi Rahman, Zifan Carl Guo, David Atkinson, Neil Thompson, Jaime Sevilla

We investigate the rate at which algorithms for pre-training language models have improved since the advent of deep learning.

Language Modelling

Who is leading in AI? An analysis of industry AI research

1 code implementation • 24 Nov 2023 • Ben Cottier, Tamay Besiroglu, David Owen

The data reveals a diverse ecosystem of companies steering AI progress, though US labs such as Google, OpenAI and Meta lead across critical metrics.

Explosive growth from AI automation: A review of the arguments

no code implementations • 20 Sep 2023 • Ege Erdil, Tamay Besiroglu

Key questions remain about the intensity of regulatory responses to AI, physical bottlenecks in production, the economic value of superhuman abilities, and the rate at which AI automation could occur.

Economic impacts of AI-augmented R&D

no code implementations • 15 Dec 2022 • Tamay Besiroglu, Nicholas Emery-Xu, Neil Thompson

Since its emergence around 2010, deep learning has rapidly become the most important technique in Artificial Intelligence (AI), producing an array of scientific firsts in areas as diverse as protein folding, drug discovery, integrated chip design, and weather prediction.

Drug Discovery · Protein Folding

Algorithmic progress in computer vision

no code implementations • 10 Dec 2022 • Ege Erdil, Tamay Besiroglu

Using Shapley values to attribute performance improvements, we find that algorithmic improvements have been roughly as important as the scaling of compute for progress in computer vision.

Attribute · Image Classification

Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning

no code implementations • 26 Oct 2022 • Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, Anson Ho

We analyze the growth of dataset sizes used in machine learning for natural language processing and computer vision, and extrapolate these trends using two methods: projecting the historical growth rate, and estimating the compute-optimal dataset size for future predicted compute budgets.

Compute Trends Across Three Eras of Machine Learning

1 code implementation • 11 Feb 2022 • Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Villalobos

Since the advent of Deep Learning in the early 2010s, the scaling of training compute has accelerated, doubling approximately every 6 months.

BIG-bench Machine Learning
