no code implementations • 27 Feb 2020 • Daniel A. Abolafia, Rishabh Singh, Manzil Zaheer, Charles Sutton
Main consists of a neural controller that interacts with a variable-length input tape and learns to compose modules together with their corresponding argument choices.
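The idea of a controller that composes modules and picks their arguments can be illustrated with a toy sketch. Everything here is illustrative, not the paper's architecture: the module set, the tape representation, and the step format are all assumptions; in the paper the choices are made by a learned neural controller rather than given as a fixed list.

```python
# Hypothetical sketch: a "program" is a sequence of (module, arg, arg)
# choices applied to values on a growing tape. Each step reads two tape
# positions, applies the chosen module, and appends the result, so later
# steps can compose earlier outputs. Module names are invented for the demo.
MODULES = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def run_program(tape, steps):
    """Execute a sequence of (module name, arg index, arg index) choices."""
    tape = list(tape)
    for name, i, j in steps:
        tape.append(MODULES[name](tape[i], tape[j]))  # result extends the tape
    return tape[-1]

# Compose (2 + 3) * 4 from the initial tape [2, 3, 4]:
# step 1 writes 5 at index 3, step 2 multiplies it by tape[2].
result = run_program([2, 3, 4], [("add", 0, 1), ("mul", 3, 2)])
```

A learned controller would replace the hard-coded `steps` with sampled module/argument choices, trained so that composed programs produce the desired outputs.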
no code implementations • ICLR 2019 • Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs).
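The FCN/GP equivalence is constructive: in the infinite-width limit, the network's outputs are a Gaussian process whose covariance is computed by a layer-wise kernel recursion. For ReLU activations the per-layer expectation has a closed form (the arc-cosine kernel of Cho & Saul). The sketch below shows one step of that recursion for a pair of inputs; the variance parameters and function name are mine, and this scalar version omits the batched/matrix form a real implementation would use.

```python
import math

def nngp_relu_step(kxx, kxy, kyy, sw2=1.0, sb2=0.0):
    """One layer of the NNGP kernel recursion for ReLU activations.

    kxx, kyy: previous-layer kernel values K(x,x), K(y,y)
    kxy:      previous-layer cross term K(x,y)
    sw2, sb2: weight and bias variances (assumed parameterization)
    """
    # Angle between the two inputs under the previous kernel; clamp for
    # floating-point safety before acos.
    cos_t = max(-1.0, min(1.0, kxy / math.sqrt(kxx * kyy)))
    theta = math.acos(cos_t)
    # Arc-cosine kernel closed form for E[relu(u) relu(v)].
    kxy_new = sb2 + sw2 / (2 * math.pi) * math.sqrt(kxx * kyy) * (
        math.sin(theta) + (math.pi - theta) * math.cos(theta))
    # Diagonal entries: theta = 0, so the expectation reduces to k/2.
    kxx_new = sb2 + sw2 * kxx / 2
    kyy_new = sb2 + sw2 * kyy / 2
    return kxx_new, kxy_new, kyy_new
```

Iterating this step once per hidden layer yields the covariance of the limiting GP, which can then be used for exact Bayesian inference in place of training the finite network.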
no code implementations • 11 Oct 2018 • Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs).
no code implementations • ICLR 2018 • Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models.
4 code implementations • 10 Jan 2018 • Daniel A. Abolafia, Mohammad Norouzi, Jonathan Shen, Rui Zhao, Quoc V. Le
Models and examples built with TensorFlow
no code implementations • ICLR 2018 • Daniel A. Abolafia, Quoc V. Le, Mohammad Norouzi
We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards.
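The search problem described here can be sketched with a simple reward-guided loop that keeps the highest-reward programs found so far in a priority queue, in the spirit of the priority-queue training idea; this is a toy illustration under my own assumptions (the sampling distribution, toy program domain, and function names are invented), not the paper's algorithm, which trains an RNN generator on the queue's contents.

```python
import heapq
import random

def synthesize(sample_program, reward_fn, iters=200, k=3, seed=0):
    """Sample candidate programs, score each with the reward function over
    its output, and retain the top-k in a min-heap. In priority-queue
    training, these retained programs would serve as training targets for
    the program generator; here we only return them."""
    rng = random.Random(seed)
    best = []  # min-heap of (reward, tiebreak, program)
    for i in range(iters):
        prog = sample_program(rng)
        heapq.heappush(best, (reward_fn(prog), i, prog))
        if len(best) > k:
            heapq.heappop(best)  # evict the lowest-reward program
    return sorted(best, reverse=True)  # highest reward first

# Toy domain (illustrative only): a program is five +1/-1 steps and the
# reward is the final sum, so the maximal achievable reward is 5.
sample = lambda rng: [rng.choice([-1, 1]) for _ in range(5)]
reward = lambda prog: sum(prog)
top = synthesize(sample, reward)
```

The integer tiebreak in the heap tuples avoids comparing programs directly when rewards are equal; a neural synthesizer would replace the random `sample_program` with samples from a learned policy.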