no code implementations • 25 Feb 2022 • Sergey Bartunov, Fabian B. Fuchs, Timothy Lillicrap
Processing sets or other unordered, potentially variable-sized inputs in neural networks is usually handled by aggregating a number of input tensors into a single representation.
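The aggregation the abstract refers to is typically a permutation-invariant pooling (sum, mean, or max) over per-element encodings. Below is a minimal sum-pooling sketch; the encoder, shapes, and names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def encode(x, W):
    """Per-element encoder: a single linear layer with ReLU (illustrative)."""
    return np.maximum(0.0, x @ W)

def aggregate_set(elements, W):
    """Map a variable-sized set of input vectors to one fixed-size vector.

    Sum pooling is invariant to element order, so the result is a
    valid representation of the set rather than of a sequence.
    """
    encoded = np.stack([encode(e, W) for e in elements])  # (n, d_out)
    return encoded.sum(axis=0)                            # (d_out,)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
set_a = [rng.normal(size=4) for _ in range(3)]   # a set of 3 elements
set_b = [set_a[2], set_a[0], set_a[1]]           # same set, reordered
assert np.allclose(aggregate_set(set_a, W), aggregate_set(set_b, W))
```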
1 code implementation • 21 Jul 2021 • Nicolas Sonnerat, Pengming Wang, Ira Ktena, Sergey Bartunov, Vinod Nair
Large Neighborhood Search (LNS) is a combinatorial optimization heuristic that starts with an assignment of values for the variables to be optimized, and iteratively improves it by searching a large neighborhood around the current assignment.
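A hedged sketch of the generic LNS loop just described: unassign a large subset of variables, re-optimize them, and keep the result if it improves. The `destroy` and `repair` operators are problem-specific; the toy problem below is purely illustrative.

```python
import random

def large_neighborhood_search(initial, cost, destroy, repair, iters=100):
    """Generic LNS loop: iteratively improve an assignment by searching
    a large neighborhood around the current best solution."""
    best = initial
    best_cost = cost(best)
    for _ in range(iters):
        partial = destroy(best)        # unassign a large neighborhood
        candidate = repair(partial)    # search it for a completion
        c = cost(candidate)
        if c < best_cost:              # accept only improvements
            best, best_cost = candidate, c
    return best, best_cost

# Toy usage: minimize the number of 1s in a binary assignment.
n = 20
init = [1] * n
cost = lambda x: sum(x)
def destroy(x):
    x = list(x)
    for i in random.sample(range(n), k=n // 2):
        x[i] = None                    # free half the variables
    return x
def repair(x):
    return [0 if v is None else v for v in x]  # trivially re-optimize

random.seed(0)
print(large_neighborhood_search(init, cost, destroy, repair))
```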
no code implementations • NeurIPS 2021 • Yaroslav Ganin, Sergey Bartunov, Yujia Li, Ethan Keller, Stefano Saliceti
Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars.
1 code implementation • 23 Dec 2020 • Vinod Nair, Sergey Bartunov, Felix Gimeno, Ingrid von Glehn, Pawel Lichocki, Ivan Lobov, Brendan O'Donoghue, Nicolas Sonnerat, Christian Tjandraatmadja, Pengming Wang, Ravichandra Addanki, Tharindi Hapuarachchi, Thomas Keck, James Keeling, Pushmeet Kohli, Ira Ktena, Yujia Li, Oriol Vinyals, Yori Zwols
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
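As a rough illustration of the diving idea (fix the model's most confident integer variables, then hand the reduced problem to a base solver such as SCIP), here is a hypothetical sketch; `predict_proba` and `solve_submip` are stand-ins I have invented, not the paper's components.

```python
import numpy as np

def neural_diving_step(features, predict_proba, fix_fraction, solve_submip):
    """Sketch of diving with a learned model: fix the most confident
    binary variables to their predicted values, then solve the smaller
    residual MIP with the base solver (stubbed here)."""
    p = predict_proba(features)              # P(x_i = 1) per binary var
    confidence = np.abs(p - 0.5)             # distance from "unsure"
    k = int(fix_fraction * len(p))
    chosen = np.argsort(-confidence)[:k]     # k most confident variables
    fixed = {int(i): int(p[i] > 0.5) for i in chosen}
    return solve_submip(fixed)               # solve with these fixed
```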
no code implementations • NeurIPS Workshop LMCA 2020 • Sergey Bartunov, Vinod Nair, Peter Battaglia, Tim Lillicrap
Combinatorial optimization problems are notoriously hard because they often require enumeration of the exponentially large solution space.
no code implementations • ICLR 2020 • Sergey Bartunov, Jack W. Rae, Simon Osindero, Timothy P. Lillicrap
We study the problem of learning associative memory -- a system able to retrieve a remembered pattern from a distorted or incomplete version of it.
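For context, the textbook instance of associative memory is a Hopfield network, which retrieves a stored ±1 pattern from a corrupted probe. The sketch below is that classical baseline, not the paper's learned memory.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian storage: W is the sum of outer products of ±1 patterns."""
    d = patterns.shape[1]
    W = patterns.T @ patterns / d
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_retrieve(W, probe, steps=20):
    """Iteratively denoise a distorted probe toward a stored pattern."""
    x = probe.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # remembered patterns
W = hopfield_store(patterns)
probe = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
probe[flip] *= -1                                  # distort 10 of 64 bits
# Well below capacity, retrieval typically recovers the stored pattern.
print(np.array_equal(hopfield_retrieve(W, probe), patterns[0]))
```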
no code implementations • ICLR 2019 • Jack W. Rae, Sergey Bartunov, Timothy P. Lillicrap
There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with the aim of faster execution, better accuracy, or greater compression.
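One classic hand-crafted structure in this line of work is the Bloom filter for approximate set membership; a minimal conventional implementation is sketched below for reference (the hash scheme and sizes are arbitrary choices, not anything from the paper).

```python
import hashlib

class BloomFilter:
    """Classic hand-crafted approximate-membership structure: false
    positives are possible, false negatives are not."""
    def __init__(self, n_bits=1024, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits)

    def _indices(self, item):
        # Derive n_hashes indices by salting a single hash function.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add("alice")
print("alice" in bf, "bob" in bf)   # True, and almost surely False
```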
1 code implementation • NeurIPS 2018 • Sergey Bartunov, Adam Santoro, Blake A. Richards, Luke Marris, Geoffrey E. Hinton, Timothy Lillicrap
Here we present results on scaling up biologically motivated models of deep learning on datasets which need deep networks with appropriate architectures to achieve good performance.
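One biologically motivated algorithm commonly assessed in this setting is feedback alignment, which replaces the weight transpose in the backward pass with a fixed random feedback matrix. The single-hidden-layer sketch below is my own toy construction, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, lr = 8, 16, 1, 0.1
W1 = rng.normal(0, 0.5, (d_in, d_h))
W2 = rng.normal(0, 0.5, (d_h, d_out))
B2 = rng.normal(0, 0.5, (d_out, d_h))   # fixed random feedback matrix

X = rng.normal(size=(64, d_in))
y = (X[:, :1] > 0).astype(float)        # toy binary regression target

for _ in range(200):
    h = np.maximum(0.0, X @ W1)         # forward pass (ReLU hidden layer)
    out = h @ W2
    err = out - y                       # dL/dout for squared error
    # Feedback alignment: propagate error through fixed B2, not W2.T
    dh = (err @ B2) * (h > 0)
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ dh / len(X)

print("final MSE:", float(np.mean((np.maximum(0, X @ W1) @ W2 - y) ** 2)))
```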
1 code implementation • 22 Nov 2017 • Oleg Ivanov, Sergey Bartunov
The experimental evaluation shows that this approach significantly improves the quality of cardinality estimation, and therefore speeds up the DBMS on some queries by several times, or even by several dozen times.
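As a toy illustration of learned cardinality correction (not the paper's actual design), one can remember observed log-cardinalities of past queries alongside query feature vectors and predict for new queries, e.g. with k-nearest-neighbour averaging:

```python
import numpy as np

def knn_log_cardinality(query, seen_feats, seen_logcards, k=3):
    """Predict a query's log-cardinality as the mean over its k nearest
    previously executed queries in feature space (illustrative only)."""
    d = np.linalg.norm(seen_feats - query, axis=1)
    nearest = np.argsort(d)[:k]
    return seen_logcards[nearest].mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 5))                    # past query features
logcards = feats @ np.array([1.0, 0.5, 0, 0, 2.0])   # synthetic truth
q = rng.normal(size=5)
print("estimated cardinality:", np.exp(knn_log_cardinality(q, feats, logcards)))
```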
11 code implementations • 16 Aug 2017 • Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing
Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain.
Ranked #1 on StarCraft II on MoveToBeacon
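To make the "canonical deep RL agent" setting concrete, here is a toy stand-in for the MoveToBeacon mini-game with the standard agent-environment loop; this grid world is my invention and bears no relation to the pysc2 API.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_episode(policy, size=8, max_steps=50):
    """Agent on a grid is rewarded each time it reaches the beacon,
    which then respawns -- a crude analogue of MoveToBeacon."""
    agent, beacon = rng.integers(size, size=2), rng.integers(size, size=2)
    total = 0.0
    for _ in range(max_steps):
        action = policy(agent, beacon)                 # 0..3: N/S/W/E
        step = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        agent = np.clip(agent + step, 0, size - 1)
        if np.array_equal(agent, beacon):
            total += 1.0
            beacon = rng.integers(size, size=2)        # respawn beacon
    return total

# A simple greedy policy: step toward the beacon along one axis.
greedy = lambda a, b: int(np.argmax([b[0] < a[0], b[0] > a[0],
                                     b[1] < a[1], b[1] > a[1]]))
print("greedy score:", run_episode(greedy))
```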
no code implementations • 7 Dec 2016 • Sergey Bartunov, Dmitry P. Vetrov
Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulty generalizing from a small number of training examples.
11 code implementations • 19 May 2016 • Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, Timothy Lillicrap
Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning."
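One-shot learning is conventionally evaluated with N-way, K-shot episodes: the learner sees one labelled support example per class and must classify held-out queries. Below is a minimal episode sampler; it is illustrative scaffolding only, since the paper's contribution is the memory-augmented learner, not the sampling.

```python
import numpy as np

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=1, rng=None):
    """Sample an N-way, K-shot episode: k_shot labelled examples per
    class (support) plus n_query unlabelled queries to classify."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(len(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.choice(len(data_by_class[c]), size=k_shot + n_query,
                         replace=False)
        items = [data_by_class[c][i] for i in idx]
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query

# Toy data: 20 classes, 10 examples each (e.g. Omniglot-style characters).
rng = np.random.default_rng(0)
data = [[rng.normal(size=4) for _ in range(10)] for _ in range(20)]
support, query = sample_episode(data, rng=rng)
print(len(support), len(query))   # 5 support, 5 query
```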
3 code implementations • 25 Feb 2015 • Sergey Bartunov, Dmitry Kondrashkin, Anton Osokin, Dmitry Vetrov
The recently proposed Skip-gram model is a powerful method for learning high-dimensional word representations that capture rich semantic relationships between words.
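For reference, the base model named here is Skip-gram, usually trained with negative sampling: each observed (center, context) pair is pushed together while a few sampled "negative" words are pushed apart. A minimal single-update sketch of that standard word2vec-style training step (not the paper's adaptive extension):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_step(W_in, W_out, center, context, negatives, lr=0.025):
    """One skip-gram-with-negative-sampling update: raise the score of
    the true (center, context) pair and lower it for sampled negatives."""
    v = W_in[center]
    v_grad = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        g = sigmoid(u @ v) - label        # d(logistic loss)/d(u·v)
        v_grad += g * u
        W_out[word] -= lr * g * v
    W_in[center] -= lr * v_grad

rng = np.random.default_rng(0)
V, d = 50, 16                             # vocabulary size, embedding dim
W_in = rng.normal(0, 0.1, (V, d))
W_out = np.zeros((V, d))
sgns_step(W_in, W_out, center=3, context=7, negatives=[11, 29, 42])
```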