no code implementations • 26 Sep 2022 • Michael Schaarschmidt, Morgane Riviere, Alex M. Ganose, James S. Spencer, Alexander L. Gaunt, James Kirkpatrick, Simon Axelrod, Peter W. Battaglia, Jonathan Godwin

We present evidence that learned density functional theory ("DFT") force fields are ready for ground state catalyst discovery.

2 code implementations • ICLR 2019 • Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, Alexander L. Gaunt

We introduce the problem of learning distributed representations of edits.

3 code implementations • ICLR 2019 • Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E. Turner, José Miguel Hernández-Lobato, Alexander L. Gaunt

We provide two innovations that aim to turn VB into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel Empirical Bayes procedure for automatically selecting prior variances.
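The first innovation above, deterministic moment approximation, can be illustrated with the standard closed-form Gaussian moment rules for a linear layer followed by a ReLU. This is a generic moment-matching sketch, not the paper's actual code; the function names and the diagonal-covariance assumption are mine.

```python
import math
import numpy as np

def linear_moments(mu, var, W, b):
    """Propagate mean/variance through y = W x + b, assuming the
    inputs are independent (diagonal covariance)."""
    mu_out = W @ mu + b
    var_out = (W ** 2) @ var
    return mu_out, var_out

def relu_moments(mu, var):
    """Closed-form mean/variance of max(0, x) for x ~ N(mu, var)."""
    sigma = np.sqrt(var)
    z = mu / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)          # standard normal pdf
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))  # standard normal cdf
    mean = mu * cdf + sigma * pdf
    second = (mu ** 2 + var) * cdf + mu * sigma * pdf             # E[max(0, x)^2]
    return mean, second - mean ** 2
```

Propagating moments layer by layer in this way replaces Monte Carlo sampling of the weights, which is what eliminates gradient variance in the variational objective.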

1 code implementation • NeurIPS 2018 • Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, Alexander L. Gaunt

Graphs are ubiquitous data structures for representing interactions between entities.

1 code implementation • ICLR 2019 • Marc Brockschmidt, Miltiadis Allamanis, Alexander L. Gaunt, Oleksandr Polozov

Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints and about natural, likely programs.

1 code implementation • ICLR 2018 • Renjie Liao, Marc Brockschmidt, Daniel Tarlow, Alexander L. Gaunt, Raquel Urtasun, Richard Zemel

We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs.
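The core idea, alternating message passing within partitions and across partition boundaries, can be sketched with sum-aggregation over a partitioned edge set. This is a minimal illustration of the propagation schedule, assuming hypothetical helper names; it is not the paper's implementation.

```python
import numpy as np

def propagate(h, edges):
    """One round of sum-aggregation message passing over directed edges."""
    out = h.copy()
    for u, v in edges:
        out[v] += h[u]
    return out

def gpnn_schedule(h, edges, partition, inner_steps=2):
    """Run several cheap propagation steps inside each partition, then a
    single step across cut edges (a sketch of the GPNN-style schedule)."""
    local = [(u, v) for u, v in edges if partition[u] == partition[v]]
    cut = [(u, v) for u, v in edges if partition[u] != partition[v]]
    for _ in range(inner_steps):
        h = propagate(h, local)
    return propagate(h, cut)
```

Partitioning lets the expensive global synchronization happen rarely, which is what makes the scheme tractable on extremely large graphs.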

1 code implementation • ICLR 2018 • Alexander L. Gaunt, Matthew A. Johnson, Maik Riechert, Daniel Tarlow, Ryota Tomioka, Dimitrios Vytiniotis, Sam Webster

Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in significantly shorter overall training times.

no code implementations • 2 Dec 2016 • Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow

A TerpreT model is composed of a specification of a program representation and an interpreter that describes how programs map inputs to outputs.

3 code implementations • 7 Nov 2016 • Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow

We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning.
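The approach can be sketched as a neural model scoring which DSL functions are likely to appear given the I/O examples, with those scores steering an enumerative search. The toy DSL and function names below are illustrative, not the paper's DSL.

```python
from itertools import product

# A toy DSL of unary list functions (hypothetical, for illustration).
DSL = {
    "reverse": lambda xs: xs[::-1],
    "sort": sorted,
    "double": lambda xs: [2 * x for x in xs],
    "drop_first": lambda xs: xs[1:],
}

def run(prog, xs):
    """Apply a sequence of DSL ops to an input list."""
    for op in prog:
        xs = DSL[op](xs)
    return xs

def search(examples, op_scores, max_len=2):
    """Enumerate op sequences, trying high-scoring ops first, and return
    the first program consistent with all I/O examples."""
    ops = sorted(DSL, key=lambda o: -op_scores.get(o, 0.0))
    for length in range(1, max_len + 1):
        for prog in product(ops, repeat=length):
            if all(run(prog, i) == o for i, o in examples):
                return prog
    return None
```

In the paper the scores come from a learned model; here they would simply be supplied, and the speedup comes from exploring probable ops before improbable ones.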

no code implementations • ICML 2017 • Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow

We develop a framework for combining differentiable programming languages with neural networks.

1 code implementation • 7 Nov 2016 • John K. Feser, Marc Brockschmidt, Alexander L. Gaunt, Daniel Tarlow

Recent work on differentiable interpreters relaxes the discrete space of programs into a continuous space so that search over programs can be performed using gradient-based optimization.
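The relaxation can be illustrated in miniature: a discrete choice between two instructions becomes a soft mixture weighted by a sigmoid of a real parameter, which gradient descent can then fit to I/O examples. The instruction names and the two-op program space are illustrative assumptions, not any specific interpreter from the literature.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def soft_exec(theta, x):
    """Soft mixture of two candidate instructions, `add1` and `mul2`
    (hypothetical names), weighted by sigmoid(theta)."""
    p = sigmoid(theta)
    return p * (x + 1) + (1 - p) * (2 * x)

# Fit theta by gradient descent on examples generated by "add 1".
examples = [(0, 1), (3, 4), (10, 11)]
theta, lr = 0.0, 0.5
for _ in range(200):
    grad = 0.0
    for x, y in examples:
        p = sigmoid(theta)
        err = soft_exec(theta, x) - y
        # d(soft_exec)/d(theta) = p * (1 - p) * ((x + 1) - 2 * x)
        grad += 2 * err * p * (1 - p) * ((x + 1) - 2 * x)
    theta -= lr * grad / len(examples)
```

After optimization the mixture weight should move toward the `add1` instruction, i.e. the continuous parameters approach a discrete program consistent with the examples.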

no code implementations • 15 Aug 2016 • Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow

TerpreT is similar to a probabilistic programming language: a model is composed of a specification of a program representation (declarations of random variables) and an interpreter describing how programs map inputs to outputs (a model connecting unknowns to observations).
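That two-part structure, random variables for the program plus an interpreter connecting them to observations, can be sketched with inference by exhaustive enumeration. The opcode/constant program space below is an illustrative assumption, not TerpreT's actual syntax or any of its back-end solvers.

```python
from itertools import product

# Program representation: two unknowns, an opcode and a small constant
# (the "declarations of random variables").
OPCODES = ["add", "mul"]
CONSTS = range(5)

def interpret(opcode, const, x):
    """Interpreter: maps an input to an output given the unknowns."""
    return x + const if opcode == "add" else x * const

def solve(observations):
    """Inference by enumeration: return the first assignment of the
    unknowns consistent with every observed (input, output) pair."""
    for opcode, const in product(OPCODES, CONSTS):
        if all(interpret(opcode, const, x) == y for x, y in observations):
            return opcode, const
    return None
```

TerpreT's contribution is that the same model specification can be compiled to several such inference back ends (gradient descent, SMT, ILP), of which brute-force enumeration is only the simplest.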

Papers With Code is a free resource with all data licensed under CC-BY-SA.