1 code implementation • 15 Aug 2022 • David Bieber, Kensen Shi, Petros Maniatis, Charles Sutton, Vincent Hellendoorn, Daniel Johnson, Daniel Tarlow
Graph representations of programs are a central element of much machine learning research on code.
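As a minimal illustration of the idea (not the representation used in this work), the sketch below builds a graph over a Python program's AST with the standard ast module, using parent-to-child edges only; richer edge types such as data flow and control flow are what work in this area typically adds.

    import ast

    def ast_graph(source):
        """Build a simple program graph: one node per AST node, parent->child edges."""
        tree = ast.parse(source)
        index = {id(node): i for i, node in enumerate(ast.walk(tree))}
        nodes = [type(node).__name__ for node in ast.walk(tree)]
        edges = [(index[id(parent)], index[id(child)])
                 for parent in ast.walk(tree)
                 for child in ast.iter_child_nodes(parent)]
        return nodes, edges

    nodes, edges = ast_graph("x = 1\ny = x + 2")
    print(len(nodes), "nodes,", len(edges), "edges")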
1 code implementation • 21 Jul 2022 • David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-Dickstein, Kevin Murphy, Charles Sutton
Prompted models have demonstrated impressive few-shot learning abilities.
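For context, "prompting" here means conditioning a language model on a handful of worked examples; a hypothetical few-shot prompt:

    # A hypothetical few-shot prompt: the model infers the task from the
    # examples and completes the final line.
    prompt = (
        "English: cat -> French: chat\n"
        "English: dog -> French: chien\n"
        "English: bird -> French:"
    )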
1 code implementation • 7 Mar 2022 • David Bieber, Rishab Goel, Daniel Zheng, Hugo Larochelle, Daniel Tarlow
This presents an interesting machine learning challenge: can we predict runtime errors in a "static" setting, where program execution is not possible?
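To make the task concrete (a hypothetical instance, not an example from the paper's dataset): the model sees only source code and must predict the error class the program would raise, without executing it.

    # Input: source code only; the program is never run.
    source = """
    def average(values):
        return sum(values) / len(values)

    print(average([]))
    """
    # Target: one label from a fixed set of error classes (or "no error").
    target_label = "ZeroDivisionError"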
no code implementations • NeurIPS 2021 • Shobha Vasudevan, Wenjie (Joe) Jiang, David Bieber, Rishabh Singh, Hamid Shojaei, C. Richard Ho, Charles Sutton
We evaluate Design2Vec on three real-world hardware designs, including an industrial chip used in commercial data centers.
no code implementations • 30 Nov 2021 • Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena
Large pre-trained language models perform remarkably well on tasks that can be done "in one pass", such as generating realistic text or synthesizing computer programs.
1 code implementation • NeurIPS 2020 • David Bieber, Charles Sutton, Hugo Larochelle, Daniel Tarlow
More practically, we evaluate these models on the task of learning to execute partial programs, as might arise if using the model as a heuristic function in program synthesis.
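As a toy illustration of what "executing a partial program" yields (run_prefix is an assumed helper, not the paper's model, which learns to predict such outcomes without running the code):

    def run_prefix(lines):
        """Execute an initial fragment of a program and return the variable state."""
        env = {}
        exec("\n".join(lines), {}, env)  # toy setting: trusted, side-effect-free code
        return env

    partial_program = ["x = 3", "y = x * x"]  # the remaining lines are not yet written
    print(run_prefix(partial_program))        # {'x': 3, 'y': 9}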
no code implementations • ICLR 2021 • Augustus Odena, Kensen Shi, David Bieber, Rishabh Singh, Charles Sutton, Hanjun Dai
Program synthesis is challenging largely because of the difficulty of search in a large space of programs.
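A small bottom-up enumerative search over a toy DSL (an illustration of the search problem, not this paper's algorithm) shows why: the pool of candidate expressions grows exponentially with expression size.

    import itertools

    def synthesize(inputs, outputs, max_size=3):
        """Enumerate expressions over a tiny DSL (x, constants, +, *) and
        return one that matches the given input-output examples."""
        exprs = {1: [("x", lambda x: x), ("1", lambda x: 1), ("2", lambda x: 2)]}
        for size in range(2, max_size + 1):
            exprs[size] = []
            for s1_size in range(1, size):
                pairs = itertools.product(exprs[s1_size], exprs[size - s1_size])
                for (s1, f1), (s2, f2) in pairs:
                    exprs[size].append(
                        (f"({s1} + {s2})", lambda x, f1=f1, f2=f2: f1(x) + f2(x)))
                    exprs[size].append(
                        (f"({s1} * {s2})", lambda x, f1=f1, f2=f2: f1(x) * f2(x)))
            for s, f in exprs[size]:
                if all(f(i) == o for i, o in zip(inputs, outputs)):
                    return s
        return None

    print(synthesize([1, 2, 3], [3, 5, 7]))  # finds an expression for 2*x + 1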
1 code implementation • ICLR 2020 • Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, David Bieber
By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.
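For reference, a variable misuse is a use of the wrong, but in-scope, identifier; a hypothetical instance:

    def rectangle_area(width, height):
        return width * width  # variable misuse: should be `width * height`

An identification model classifies whether a function contains such a bug; the ICLR 2019 entry below additionally localizes the faulty token and proposes the repair.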
2 code implementations • NeurIPS Workshop CAP 2020 • Kensen Shi, David Bieber, Rishabh Singh
The success and popularity of deep learning are on the rise, due in part to powerful frameworks such as TensorFlow and PyTorch that make it easier to develop deep learning models.
1 code implementation • ICML 2020 • Kensen Shi, David Bieber, Charles Sutton
Sampling is a fundamental technique, and sampling without replacement is often desirable when duplicate samples are not beneficial.
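One standard recipe for weighted sampling without replacement is the Gumbel-top-k trick, sketched here generically (this paper's contribution is an incremental algorithm, which this sketch does not implement):

    import numpy as np

    def sample_without_replacement(log_probs, k, seed=0):
        """Gumbel-top-k: the indices of the k largest perturbed
        log-probabilities form a weighted sample without replacement."""
        rng = np.random.default_rng(seed)
        gumbels = rng.gumbel(size=len(log_probs))
        return np.argsort(log_probs + gumbels)[-k:][::-1]

    log_probs = np.log(np.array([0.5, 0.3, 0.1, 0.1]))
    print(sample_without_replacement(log_probs, k=2))  # two distinct indices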
no code implementations • 4 Apr 2019 • Rui Zhao, David Bieber, Kevin Swersky, Daniel Tarlow
In this work, we instead treat source code as a dynamic object and tackle the problem of modeling the edits that software developers make to source code files.
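One common explicit representation of an edit, shown here with Python's difflib as a generic illustration (not necessarily the representation this paper models):

    import difflib

    before = ["x = 1", "print(x)"]
    after = ["x = 2", "print(x)", "print('done')"]
    # The edit is a sequence of operations mapping `before` to `after`.
    for op in difflib.SequenceMatcher(a=before, b=after).get_opcodes():
        print(op)  # e.g. ('replace', 0, 1, 0, 1), ('equal', 1, 2, 1, 2), ...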
2 code implementations • ICLR 2019 • Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, Rishabh Singh
We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.
no code implementations • 19 May 2017 • Sergio Guadarrama, Ryan Dahl, David Bieber, Mohammad Norouzi, Jonathon Shlens, Kevin Murphy
Then, given the generated low-resolution color image and the original grayscale image as inputs, we train a second CNN to generate a high-resolution colorization of the image.
Ranked #3 on Colorization on ImageNet val
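A minimal sketch of such a two-stage pipeline in PyTorch, with assumed toy architectures rather than the paper's networks:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LowResColorizer(nn.Module):
        """Stage 1: grayscale image -> coarse low-resolution RGB."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, stride=2, padding=1),
            )
        def forward(self, gray):
            return self.net(gray)

    class Refiner(nn.Module):
        """Stage 2: grayscale + upsampled coarse color -> high-resolution RGB."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1),
            )
        def forward(self, gray, coarse):
            up = F.interpolate(coarse, size=gray.shape[-2:], mode="bilinear",
                               align_corners=False)
            return self.net(torch.cat([gray, up], dim=1))

    gray = torch.randn(1, 1, 64, 64)   # dummy grayscale input
    coarse = LowResColorizer()(gray)   # (1, 3, 16, 16) coarse color
    color = Refiner()(gray, coarse)    # (1, 3, 64, 64) full-resolution color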