Graph representations of programs are a central element of much research on machine learning for code.
1 code implementation • 21 Jul 2022 • David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-Dickstein, Kevin Murphy, Charles Sutton
Prompted models have demonstrated impressive few-shot learning abilities.
This presents an interesting machine learning challenge: can we predict runtime errors in a "static" setting, where program execution is not possible?
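To make the "static" setting concrete, here is a hypothetical illustration (not the paper's learned model): the predictor sees only source text and must flag a program that would fail at runtime, without ever executing it. The `may_divide_by_zero` heuristic below is an assumption-laden stand-in for a learned predictor.

```python
# Illustration of the "static" setting: the checker inspects source text only,
# it never runs the program. (Toy heuristic, not the paper's model.)
import ast

def may_divide_by_zero(source: str) -> bool:
    """Crude static check: flag any syntactic division by the literal 0."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 0):
            return True
    return False

program = "x = 1\ny = x / 0\n"       # would raise ZeroDivisionError if run
print(may_divide_by_zero(program))   # → True
```

A learned model generalizes far beyond such syntactic patterns, but the interface is the same: source code in, error prediction out.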
We evaluate Design2Vec on three real-world hardware designs, including an industrial chip used in commercial data centers.
no code implementations • 30 Nov 2021 • Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena
Large pre-trained language models perform remarkably well on tasks that can be done "in one pass", such as generating realistic text or synthesizing computer programs.
More practically, we evaluate these models on the task of learning to execute partial programs, as might arise if using the model as a heuristic function in program synthesis.
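One way to picture using execution of partial programs as a synthesis heuristic (a hypothetical sketch, not the paper's neural approach): actually run each candidate prefix and score how close its intermediate state gets to the target. A learned executor would replace the `exec` call for programs that cannot be run.

```python
# Hypothetical sketch: score candidate program prefixes by executing them
# and comparing the resulting state to the target value.
def score_partial(prefix_lines, target):
    """Higher score = closer to target; un-executable prefixes rank last."""
    env = {}
    try:
        exec("\n".join(prefix_lines), {}, env)
    except Exception:
        return float("-inf")
    return -abs(env.get("result", 0) - target)

candidates = [["result = 2 + 2"],
              ["result = 2 * 3"],
              ["result = 10 //"]]   # syntactically incomplete prefix
best = max(candidates, key=lambda p: score_partial(p, target=6))
print(best)  # → ['result = 2 * 3']
```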
Program synthesis is challenging largely because of the difficulty of search in a large space of programs.
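The scale of that search space is easy to see in a minimal enumerative synthesizer (my illustration, not any paper's system): even a tiny grammar of arithmetic expressions grows combinatorially with depth, and the synthesizer must find a program consistent with all input/output examples.

```python
# Minimal enumerative program synthesis over a tiny expression grammar.
# The candidate set roughly squares at each depth, illustrating why search
# in program space is hard.
import itertools

def synthesize(examples, max_depth=3):
    terminals = ["x", "1", "2"]
    programs = list(terminals)
    for _ in range(max_depth):
        programs += ["(%s %s %s)" % (a, op, b)
                     for a, b in itertools.product(programs, repeat=2)
                     for op in ("+", "*")]
        for prog in programs:
            if all(eval(prog, {"x": x}) == y for x, y in examples):
                return prog
    return None

# Recover a program computing y = 2*x + 1 from examples alone.
print(synthesize([(0, 1), (1, 3), (2, 5)]))
```

Neural guidance (as in the work above) aims to prioritize this enumeration rather than visit candidates blindly.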
By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.
Deep learning continues to grow in success and popularity, due in part to powerful frameworks such as TensorFlow and PyTorch that make deep learning models easier to develop.
Sampling is a fundamental technique, and sampling without replacement is often desirable when duplicate samples are not beneficial.
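One standard way to sample without replacement from a categorical distribution (a hedged sketch of the Gumbel-top-k trick, not necessarily the incremental method in the paper above) is to perturb each log-probability with independent Gumbel noise and keep the k largest perturbed values:

```python
# Gumbel-top-k: perturb each logit with Gumbel(0, 1) noise and take the
# top k indices. This is distributionally equivalent to sampling k items
# sequentially without replacement from softmax(logits).
import math, random

def gumbel_top_k(logits, k, rng=random):
    keys = [logit - math.log(-math.log(rng.random() + 1e-12))
            for logit in logits]  # +1e-12 guards against log(0)
    return sorted(range(len(logits)), key=keys.__getitem__, reverse=True)[:k]

random.seed(0)
print(gumbel_top_k([2.0, 1.0, 0.5, 0.0], k=2))  # two distinct indices
```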
In this work, we instead treat source code as a dynamic object and tackle the problem of modeling the edits that software developers make to source code files.
We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.
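For readers unfamiliar with the task, a variable-misuse bug uses the wrong in-scope identifier at some location; a joint model must output both the location and the repair token. A hypothetical example (mine, not from the paper's dataset):

```python
def clamp_buggy(value, low, high):
    if value < low:
        return low
    if value > high:
        return low    # BUG: the in-scope variable `high` was intended
    return value

def clamp_fixed(value, low, high):
    if value < low:
        return low
    if value > high:
        return high   # repair: replace the misused `low` with `high`
    return value

print(clamp_buggy(15, 0, 10), clamp_fixed(15, 0, 10))  # → 0 10
```

A joint localize-and-repair model would point at the second `return low` and propose `high` in one prediction, rather than treating detection and repair as separate passes.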
Then, given the generated low-resolution color image and the original grayscale image as inputs, we train a second CNN to generate a high-resolution colorization of an image.
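The two-stage shape of that pipeline can be sketched schematically (trivial stand-in functions where the paper trains CNNs; the downsampling factor and fusion rule here are my assumptions for illustration):

```python
# Schematic two-stage colorization pipeline with toy stand-ins for the CNNs:
# stage 1 produces a coarse "color" image, stage 2 fuses it with the
# original full-resolution grayscale input.
def low_res_colorize(gray):                 # stand-in for the first CNN
    # downsample by 4 and attach a fake 3-channel color to each pixel
    return [[(v, v, v) for v in row[::4]] for row in gray[::4]]

def high_res_colorize(low_res, gray):       # stand-in for the second CNN
    # upsample the coarse colors and blend with the grayscale input
    return [[tuple(0.5 * c + 0.5 * gray[i][j]
                   for c in low_res[i // 4][j // 4])
             for j in range(len(gray[i]))]
            for i in range(len(gray))]

gray = [[(i + j) / 62 for j in range(32)] for i in range(32)]
out = high_res_colorize(low_res_colorize(gray), gray)
print(len(out), len(out[0]), len(out[0][0]))  # → 32 32 3
```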
Ranked #3 on Colorization on ImageNet (val)