1 code implementation • NeurIPS 2023 • Leon Klein, Andrew Y. K. Foong, Tor Erlend Fjelde, Bruno Mlodozeniec, Marc Brockschmidt, Sebastian Nowozin, Frank Noé, Ryota Tomioka
Molecular dynamics (MD) simulation is a widely used technique to simulate molecular systems, most commonly at the all-atom resolution where equations of motion are integrated with timesteps on the order of femtoseconds ($1\textrm{fs}=10^{-15}\textrm{s}$).
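To illustrate the scale implied by femtosecond timesteps, a back-of-the-envelope sketch (the 2 fs step size is an assumed, typical value for all-atom MD; actual step sizes depend on the integrator and constraints):

```python
# Number of integration steps needed to simulate 1 microsecond
# of all-atom MD with a 2 fs timestep (2 fs is an assumed,
# typical value for all-atom simulations).
FEMTOSECOND = 1e-15  # seconds
TIMESTEP = 2 * FEMTOSECOND
TARGET = 1e-6        # 1 microsecond, in seconds

steps = TARGET / TIMESTEP
print(f"{steps:.0e} steps")  # 5e+08 steps
```

Half a billion force evaluations per microsecond of simulated time is why coarse-graining and learned surrogates are attractive.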
1 code implementation • 14 Jun 2022 • Chencheng Liang, Philipp Rümmer, Marc Brockschmidt
For the second challenge, we explore graph representations of CHCs, and propose a new Relational Hypergraph Neural Network (R-HyGNN) architecture to learn program features.
no code implementations • 28 Jan 2022 • Dobrik Georgiev, Marc Brockschmidt, Miltiadis Allamanis
Learning from structured data is a core machine learning task.
no code implementations • ICLR 2022 • Daya Guo, Alexey Svyatkovskiy, Jian Yin, Nan Duan, Marc Brockschmidt, Miltiadis Allamanis
To evaluate models, we consider both ROUGE as well as a new metric RegexAcc that measures success of generating completions matching long outputs with as few holes as possible.
1 code implementation • NeurIPS 2021 • Miltiadis Allamanis, Henry Jackson-Flux, Marc Brockschmidt
Machine learning-based program analyses have recently shown the promise of integrating formal and probabilistic reasoning towards aiding software development.
3 code implementations • ICLR 2022 • Krzysztof Maziarz, Henry Jackson-Flux, Pashmina Cameron, Finton Sirockin, Nadine Schneider, Nikolaus Stiefl, Marwin Segler, Marc Brockschmidt
Recent advancements in deep learning-based modeling of molecules promise to accelerate in silico drug discovery.
1 code implementation • 8 Jun 2020 • Sheena Panthaplackel, Miltiadis Allamanis, Marc Brockschmidt
Neural sequence-to-sequence models are finding increasing use in document editing, for example in correcting a text document or repairing source code.
no code implementations • 17 Dec 2019 • Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, Marc Brockschmidt
To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models.
no code implementations • 12 Oct 2019 • Niklas Stoehr, Emine Yilmaz, Marc Brockschmidt, Jan Stuehmer
While a wide range of interpretable generative procedures for graphs exist, matching observed graph topologies with such procedures and choices for their parameters remains an open problem.
no code implementations • 25 Sep 2019 • Shruti Tople, Marc Brockschmidt, Boris Köpf, Olga Ohrimenko, Santiago Zanella-Béguelin
To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models.
14 code implementations • 20 Sep 2019 • Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, Marc Brockschmidt
To enable evaluation of progress on code search, we are releasing the CodeSearchNet Corpus and are presenting the CodeSearchNet Challenge, which consists of 99 natural language queries with about 4k expert relevance annotations of likely results from the CodeSearchNet Corpus.
2 code implementations • ICML 2020 • Marc Brockschmidt
We present results of experiments comparing different GNN architectures on three tasks from the literature, based on re-implementations of baseline methods.
1 code implementation • NeurIPS 2019 • Richard Shin, Miltiadis Allamanis, Marc Brockschmidt, Oleksandr Polozov
Program synthesis of general-purpose source code from natural language specifications is challenging due to the need to reason about high-level patterns in the target program and low-level implementation details at the same time.
3 code implementations • ICLR 2019 • Patrick Fernandes, Miltiadis Allamanis, Marc Brockschmidt
Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input.
2 code implementations • ICLR 2019 • Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, Alexander L. Gaunt
We introduce the problem of learning distributed representations of edits.
1 code implementation • 9 Jul 2018 • Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Oleksandr Polozov, Rishabh Singh
We consider the problem of neural semantic parsing, which translates natural language questions into executable SQL queries.
1 code implementation • NeurIPS 2018 • Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, Alexander L. Gaunt
Graphs are ubiquitous data structures for representing interactions between entities.
1 code implementation • ICLR 2019 • Marc Brockschmidt, Miltiadis Allamanis, Alexander L. Gaunt, Oleksandr Polozov
Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints and about natural, likely programs.
1 code implementation • ICLR 2018 • Renjie Liao, Marc Brockschmidt, Daniel Tarlow, Alexander L. Gaunt, Raquel Urtasun, Richard Zemel
We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs.
no code implementations • ICLR 2018 • Chenglong Wang, Marc Brockschmidt, Rishabh Singh
We present a system that allows for querying data tables using natural language questions, where the system translates the question into an executable SQL query.
2 code implementations • ICLR 2018 • Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi
Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax.
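The underlying "programs as graphs" idea can be sketched with the standard library: extract parent-child syntax edges from a Python AST. This shows only the syntax backbone; the paper's graphs add further edge types (e.g., data flow between variable uses), which are omitted here.

```python
import ast

# Minimal sketch: turn a snippet of source code into a list of
# parent-child syntax edges, labelled by AST node type. Real
# program graphs add semantic edges (data flow, last-use, etc.).
code = "y = x + 1"
tree = ast.parse(code)

edges = []
for parent in ast.walk(tree):
    for child in ast.iter_child_nodes(parent):
        edges.append((type(parent).__name__, type(child).__name__))

print(edges)
```

A GNN would then propagate information along these edges instead of treating the code as a flat token sequence.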
no code implementations • 22 May 2017 • Miltiadis Allamanis, Marc Brockschmidt
As initial solutions, we design a set of deep neural models that learn to represent the context of each variable location and variable usage in a data-flow-sensitive way.
no code implementations • 2 Dec 2016 • Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow
A TerpreT model is composed of a specification of a program representation and an interpreter that describes how programs map inputs to outputs.
1 code implementation • 7 Nov 2016 • John K. Feser, Marc Brockschmidt, Alexander L. Gaunt, Daniel Tarlow
Recent work on differentiable interpreters relaxes the discrete space of programs into a continuous space so that search over programs can be performed using gradient-based optimization.
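The relaxation mentioned here can be sketched as follows: instead of picking one discrete operation per program step, keep a softmax distribution over operations and execute their weighted mixture, which is differentiable in the logits. The operation set and logits below are illustrative assumptions, not the paper's actual model.

```python
import math

# Illustrative op set: a "program step" is a soft mixture of these.
OPS = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 1]

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def soft_execute(logits, x):
    # Execute the expected operation under the softmax distribution.
    weights = softmax(logits)
    return sum(w * op(x) for w, op in zip(weights, OPS))

# With near-one-hot logits, the relaxed program approaches the
# discrete choice OPS[1] (x * 2): soft_execute([0, 10, 0], 3) ≈ 6.
print(soft_execute([0.0, 10.0, 0.0], 3.0))
```

Gradient-based search then adjusts the logits; the cited work studies how well this compares to discrete search.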
no code implementations • ICML 2017 • Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow
We develop a framework for combining differentiable programming languages with neural networks.
3 code implementations • 7 Nov 2016 • Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow
We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning.
no code implementations • 15 Aug 2016 • Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow
TerpreT is similar to a probabilistic programming language: a model is composed of a specification of a program representation (declarations of random variables) and an interpreter describing how programs map inputs to outputs (a model connecting unknowns to observations).
13 code implementations • 17 Nov 2015 • Yujia Li, Daniel Tarlow, Marc Brockschmidt, Richard Zemel
Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases.
Ranked #1 on Graph Classification on IPC-grounded
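The core operation behind GNNs such as the gated graph sequence neural networks of this entry is message passing over edges. A minimal dependency-free sketch of one round is below; the actual GGNN uses a learned GRU update rather than the simple sum-and-average shown here.

```python
# One round of message passing: each node sums the states of its
# in-neighbors, then averages with its own state. Toy 3-node cycle;
# the GRU-based update of the real GGNN is simplified away.
edges = [(0, 1), (1, 2), (2, 0)]                    # directed (src, dst)
h = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}   # node states

# Aggregate: sum incoming neighbor states per node.
messages = {n: [0.0, 0.0] for n in h}
for src, dst in edges:
    messages[dst] = [m + s for m, s in zip(messages[dst], h[src])]

# Update: average current state with the aggregated message.
h_new = {n: [(a + b) / 2 for a, b in zip(h[n], messages[n])] for n in h}
print(h_new)
```

Stacking several such rounds lets information flow across multi-hop neighborhoods, which is what makes these models applicable to the chemistry, semantics, and knowledge-base domains listed above.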