no code implementations • 6 Dec 2021 • Michael Schaarschmidt, Dominik Grewe, Dimitrios Vytiniotis, Adam Paszke, Georg Stefan Schmid, Tamara Norman, James Molloy, Jonathan Godwin, Norman Alexander Rink, Vinod Nair, Dan Belov
The rapid rise in demand for training large neural network architectures has brought into focus the need for partitioning strategies, for example data, model, or pipeline parallelism.
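To make the data-parallelism strategy concrete (this is a generic illustration, not the paper's method), the sketch below shards a batch across workers, computes per-shard gradients of a least-squares loss, and averages them, mimicking an all-reduce across devices; num_workers and the toy model are illustrative.

    import numpy as np

    # Toy sketch of data parallelism for one SGD step on a linear model.
    rng = np.random.default_rng(0)
    w = np.zeros(4)                                 # replicated parameters
    x, y = rng.normal(size=(32, 4)), rng.normal(size=32)

    num_workers = 4                                 # illustrative device count
    grads = []
    for xs, ys in zip(np.split(x, num_workers), np.split(y, num_workers)):
        err = xs @ w - ys                           # per-shard forward pass
        grads.append(2 * xs.T @ err / len(ys))      # per-shard gradient
    w -= 0.1 * np.mean(grads, axis=0)               # all-reduce mean, then update

Model and pipeline parallelism instead split the network's parameters or layers across devices; the paper concerns choosing among such partitioning strategies.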
1 code implementation • 21 Jul 2021 • Nicolas Sonnerat, Pengming Wang, Ira Ktena, Sergey Bartunov, Vinod Nair
Large Neighborhood Search (LNS) is a combinatorial optimization heuristic that starts with an assignment of values for the variables to be optimized, and iteratively improves it by searching a large neighborhood around the current assignment.
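The loop is easy to state; below is a bare-bones Python skeleton of it, where the objective and the solve_neighborhood subsolver are hypothetical stand-ins rather than anything from the paper:

    import random

    def lns(initial, objective, solve_neighborhood, steps=100, k=5):
        """Generic Large Neighborhood Search skeleton.

        initial: dict mapping variable -> value (a feasible assignment).
        solve_neighborhood: hypothetical subsolver that re-optimizes the
            variables in `free` while all other variables stay fixed.
        """
        best = dict(initial)
        for _ in range(steps):
            # Destroy: unassign a subset of k variables.
            free = random.sample(list(best), k)
            # Repair: re-optimize the freed variables around `best`.
            candidate = solve_neighborhood(best, free)
            if objective(candidate) < objective(best):   # keep improvements
                best = candidate
        return best

The quality of the search hinges on which variables are freed at each step, which is exactly the decision the paper proposes to learn.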
1 code implementation • 23 Dec 2020 • Vinod Nair, Sergey Bartunov, Felix Gimeno, Ingrid von Glehn, Pawel Lichocki, Ivan Lobov, Brendan O'Donoghue, Nicolas Sonnerat, Christian Tjandraatmadja, Pengming Wang, Ravichandra Addanki, Tharindi Hapuarachchi, Thomas Keck, James Keeling, Pushmeet Kohli, Ira Ktena, Yujia Li, Oriol Vinyals, Yori Zwols
Our approach constructs two neural network-based components, Neural Diving and Neural Branching, for use in a base MIP solver such as SCIP.
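As a rough sketch of the diving-style idea, under assumed interfaces (the model and mip objects below are hypothetical, not the SCIP or paper API): a learned predictor proposes values for binary variables, the confident ones are fixed, and the base solver handles the rest.

    def neural_dive(mip, model, threshold=0.9):
        """Hypothetical sketch: fix confident predictions, solve the rest.

        model.predict is assumed to return, for each binary variable,
        the probability that it takes value 1 in a good solution.
        """
        probs = model.predict(mip)              # assumed learned predictor
        fixed = {}
        for var, p in probs.items():
            if p >= threshold:
                fixed[var] = 1                  # confidently one
            elif p <= 1 - threshold:
                fixed[var] = 0                  # confidently zero
        # The base MIP solver (e.g. SCIP) optimizes the remaining variables.
        return mip.solve(assignments=fixed)     # assumed solver interface

Neural Branching analogously replaces the solver's branching-variable choice with a learned policy.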
no code implementations • NeurIPS Workshop LMCA 2020 • Sergey Bartunov, Vinod Nair, Peter Battaglia, Tim Lillicrap
Combinatorial optimization problems are notoriously hard because they often require searching an exponentially large solution space.
no code implementations • NeurIPS Workshop LMCA 2020 • Ravichandra Addanki, Vinod Nair, Mohammad Alizadeh
Results on several datasets show that it is possible to learn a neighbor selection policy that allows LNS to efficiently find good solutions.
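In that setting, learning replaces the random destroy step of the LNS skeleton sketched above: a policy scores variables, and the top-scoring ones are freed for re-optimization. A hypothetical sketch, with the policy interface assumed:

    def select_neighborhood(policy, assignment, k=5):
        """Pick the k variables to free using a learned policy.

        policy.score is assumed to return a per-variable score, where a
        higher score marks a more promising variable to re-optimize.
        """
        scores = policy.score(assignment)       # assumed learned scorer
        ranked = sorted(scores, key=scores.get, reverse=True)
        return ranked[:k]                       # replaces random.sample above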
no code implementations • 4 Dec 2019 • Xujie Si, Yujia Li, Vinod Nair, Felix Gimeno
We share this observation in the hope that it helps the SAT community better understand the hardness of random instances used in competitions and inspires other interesting ideas on SAT solving.
no code implementations • ICLR 2020 • Aditya Paliwal, Felix Gimeno, Vinod Nair, Yujia Li, Miles Lubin, Pushmeet Kohli, Oriol Vinyals
We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler.
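As a toy illustration of this setup (not the paper's agent or cost model), the sketch below runs REINFORCE over per-op device placements, with a stand-in cost function playing the role of the compiler's execution-cost estimate:

    import numpy as np

    def placement_cost(placement, n_devices=2):
        # Stand-in for a compiler cost model: penalize load imbalance.
        counts = np.bincount(placement, minlength=n_devices)
        return float(counts.max() - counts.min())

    rng = np.random.default_rng(0)
    n_ops, n_devices = 10, 2
    logits = np.zeros((n_ops, n_devices))       # policy parameters
    baseline = 0.0                              # running reward baseline

    for _ in range(200):                        # REINFORCE loop
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        placement = np.array([rng.choice(n_devices, p=p) for p in probs])
        reward = -placement_cost(placement)     # lower cost = higher reward
        advantage = reward - baseline
        baseline += 0.1 * (reward - baseline)   # track the average reward
        for op, dev in enumerate(placement):
            grad = -probs[op]                   # d log pi(dev) / d logits
            grad[dev] += 1.0
            logits[op] += 0.1 * advantage * grad

The real problem also involves scheduling and a graph neural network over the computation graph, but the reward structure (negative execution cost) is the same.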
no code implementations • 10 Nov 2013 • Vinod Nair, Rahul Kidambi, Sundararajan Sellamanickam, S. Sathiya Keerthi, Johannes Gehrke, Vijay Narayanan
We consider the problem of quantitatively evaluating missing value imputation algorithms.
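A common evaluation protocol for this problem (a generic sketch, not necessarily the paper's exact metric) is to mask entries whose true values are known, run the imputer, and score the recovered values:

    import numpy as np

    def evaluate_imputer(impute, X, mask_frac=0.2, seed=0):
        """Mask known entries, impute, and report RMSE on the masked cells.

        impute: function taking an array with np.nan holes and returning
                a completed array of the same shape.
        """
        rng = np.random.default_rng(seed)
        mask = rng.random(X.shape) < mask_frac    # cells to hide
        X_obs = X.copy()
        X_obs[mask] = np.nan                      # hide ground-truth values
        X_hat = impute(X_obs)
        return np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))

    def mean_impute(X):
        """Trivial baseline: fill holes with per-column means."""
        return np.where(np.isnan(X), np.nanmean(X, axis=0), X)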
no code implementations • 9 Nov 2013 • Rahul Kidambi, Vinod Nair, Sundararajan Sellamanickam, S. Sathiya Keerthi
In this paper we propose a structured output approach for missing value imputation that also incorporates domain constraints.
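To make "domain constraints" concrete, here is a minimal illustrative example (not the paper's structured-output formulation): imputing a row whose entries must sum to a known total, such as budget shares, by splitting the remaining mass among the missing cells.

    import numpy as np

    def impute_with_sum_constraint(row, total=1.0):
        """Fill missing entries of a row constrained to sum to `total`.

        Illustrative only: the missing mass is split evenly among holes.
        """
        missing = np.isnan(row)
        filled = row.copy()
        if not missing.any():
            return filled
        remainder = total - np.nansum(row)        # mass left for the holes
        filled[missing] = remainder / missing.sum()
        return filled

    print(impute_with_sum_constraint(np.array([0.2, np.nan, 0.3, np.nan])))
    # -> [0.2  0.25 0.3  0.25]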
no code implementations • NeurIPS 2009 • Vinod Nair, Geoffrey E. Hinton
Our model achieves 6.5% error on the test set, close to the best published result for NORB (5.9%), which was obtained with a convolutional neural net that has built-in knowledge of translation invariance.
no code implementations • NeurIPS 2008 • Vinod Nair, Geoffrey E. Hinton
We present a mixture model whose components are Restricted Boltzmann Machines (RBMs).
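For reference, the building block: a binary RBM's conditional distributions factorize, giving the simple block-Gibbs step below (a minimal numpy sketch; the mixture model of the paper adds a discrete component index on top of this).

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v, W, b_vis, b_hid, rng):
        """One block-Gibbs step of a binary RBM: v -> h -> v'."""
        p_h = sigmoid(v @ W + b_hid)                # P(h=1 | v)
        h = (rng.random(p_h.shape) < p_h) * 1.0     # sample hidden units
        p_v = sigmoid(h @ W.T + b_vis)              # P(v=1 | h)
        return (rng.random(p_v.shape) < p_v) * 1.0  # sample visible units

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(6, 4))          # visible x hidden weights
    v = (rng.random(6) < 0.5) * 1.0
    v = gibbs_step(v, W, np.zeros(6), np.zeros(4), rng)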