13 code implementations • 7 Jul 2021 • Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba
We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities.
Ranked #1 on Multi-task Language Understanding on BBH-alg
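The paper's headline evaluation is functional correctness under the pass@k metric: the probability that at least one of k generated samples passes the unit tests. A minimal sketch of the numerically stable, unbiased estimator defined in the paper (variable names are ours):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper:
    n = samples generated per problem, c = samples passing the unit
    tests, k = evaluation budget. Computes 1 - C(n-c, k) / C(n, k)
    in a numerically stable product form."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples per problem, 37 of which pass the tests
print(pass_at_k(200, 37, 10))
```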
1 code implementation • 2 Jun 2021 • Diogo Almeida, Clemens Winter, Jie Tang, Wojciech Zaremba
A core issue with learning to optimize neural networks has been the lack of generalization to real-world problems.
no code implementations • 13 Jan 2021 • OpenAI, Matthias Plappert, Raul Sampedro, Tao Xu, Ilge Akkaya, Vineet Kosaraju, Peter Welinder, Ruben D'Sa, Arthur Petron, Henrique Ponde de Oliveira Pinto, Alex Paino, Hyeonwoo Noh, Lilian Weng, Qiming Yuan, Casey Chu, Wojciech Zaremba
We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects.
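A goal-conditioned policy can be sketched as a network that consumes the observation and the desired goal together; the sketch below is illustrative only and does not reproduce the paper's architecture (all sizes and layers are assumptions):

```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    """Illustrative goal-conditioned policy: the observation and the
    desired goal are concatenated, so a single network can pursue many
    different goals. Dimensions and layers here are assumptions."""
    def __init__(self, obs_dim: int, goal_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256),
            nn.ReLU(),
            nn.Linear(256, act_dim),
            nn.Tanh(),  # continuous actions scaled to [-1, 1]
        )

    def forward(self, obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, goal], dim=-1))
```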
no code implementations • 27 Sep 2020 • Lei M. Zhang, Matthias Plappert, Wojciech Zaremba
We further show that the transfer metric can predict the effect of training setups on policy transfer performance.
2 code implementations • 16 Oct 2019 • OpenAI, Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, Jonas Schneider, Nikolas Tezak, Jerry Tworek, Peter Welinder, Lilian Weng, Qiming Yuan, Wojciech Zaremba, Lei Zhang
We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot.
no code implementations • 1 Aug 2018 • OpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, Jonas Schneider, Szymon Sidor, Josh Tobin, Peter Welinder, Lilian Weng, Wojciech Zaremba
We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand.
30 code implementations • 26 Feb 2018 • Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, Vikash Kumar, Wojciech Zaremba
The purpose of this technical report is two-fold.
no code implementations • 18 Oct 2017 • Lerrel Pinto, Marcin Andrychowicz, Peter Welinder, Wojciech Zaremba, Pieter Abbeel
While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator.
no code implementations • 18 Oct 2017 • Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel
By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained.
Robotics • Systems and Control
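The core mechanism of dynamics randomization is to resample the simulator's physical parameters at the start of every training episode, so the policy learns to adapt to a distribution of dynamics rather than a single setting. A minimal sketch, with hypothetical parameter names and ranges:

```python
import numpy as np

def sample_dynamics(rng: np.random.Generator) -> dict:
    """Draw a fresh set of physical parameters for one episode.
    Parameter names and ranges are illustrative assumptions, not the
    paper's exact configuration."""
    return {
        "mass_scale":      rng.uniform(0.5, 1.5),   # link mass multiplier
        "friction":        rng.uniform(0.5, 1.2),   # contact friction
        "damping_scale":   rng.uniform(0.8, 1.25),  # joint damping multiplier
        "action_delay":    int(rng.integers(0, 3)), # control latency (steps)
    }

rng = np.random.default_rng(0)
for episode in range(3):
    params = sample_dynamics(rng)
    # env.set_dynamics(params)  # hypothetical simulator hook
    print(params)
```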
no code implementations • 17 Oct 2017 • Joshua Tobin, Lukas Biewald, Rocky Duan, Marcin Andrychowicz, Ankur Handa, Vikash Kumar, Bob McGrew, Jonas Schneider, Peter Welinder, Wojciech Zaremba, Pieter Abbeel
In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis.
3 code implementations • 28 Sep 2017 • Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel
Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL).
26 code implementations • NeurIPS 2017 • Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL).
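This is the Hindsight Experience Replay (HER) paper. Its key idea is to replay each episode with the goal substituted by states the agent actually achieved, so even failed episodes carry reward signal. A minimal sketch of the "future" relabeling strategy, assuming a simple transition format:

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Minimal sketch of HER's 'future' strategy. `episode` is assumed
    to be a list of (state, action, next_state, goal, achieved_goal)
    tuples, and `reward_fn(achieved, goal)` the sparse reward. Each
    transition is stored once with the original goal and k more times
    with goals achieved later in the same episode."""
    buffer = []
    for t, (s, a, s2, g, ach) in enumerate(episode):
        buffer.append((s, a, s2, g, reward_fn(ach, g)))  # original goal
        future = episode[t:]
        for _ in range(k):
            # pick a later transition; its achieved state becomes the goal
            _, _, _, _, new_g = random.choice(future)
            buffer.append((s, a, s2, new_g, reward_fn(ach, new_g)))
    return buffer
```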
no code implementations • NeurIPS 2017 • Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba
A neural network is trained to take as input one demonstration and the current state (which initially is the starting state of the other demonstration in the pair), and to output an action such that the resulting sequence of states and actions matches the second demonstration as closely as possible.
6 code implementations • 20 Mar 2017 • Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, Pieter Abbeel
Bridging the 'reality gap' that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability.
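The paper's technique, domain randomization, trains on heavily randomized renderings so that the real world looks like just one more variation of the simulator. A minimal sketch with hypothetical renderer parameters (the randomized aspects follow the paper: textures, lighting, camera pose, distractors):

```python
import numpy as np

def randomize_scene(rng: np.random.Generator) -> dict:
    """Sample one randomized rendering configuration. The renderer
    hooks and ranges below are illustrative assumptions."""
    return {
        "object_rgb":      rng.uniform(0.0, 1.0, size=3),   # random texture color
        "light_pos":       rng.uniform(-1.0, 1.0, size=3),  # light position
        "light_intensity": rng.uniform(0.3, 1.5),
        "camera_jitter":   rng.normal(0.0, 0.02, size=3),   # camera offset (m)
        "num_distractors": int(rng.integers(0, 5)),         # clutter objects
    }

rng = np.random.default_rng(0)
scene = randomize_scene(rng)
# renderer.apply(scene)  # hypothetical rendering hook
```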
1 code implementation • 2 Nov 2016 • Eric Price, Wojciech Zaremba, Ilya Sutskever
We find that these techniques expand the set of algorithmic problems the Neural GPU can solve: it learns to perform all the arithmetic operations, and to generalize to arbitrarily long numbers, when the arguments are given in decimal representation, which, surprisingly, had not been possible before.
no code implementations • 11 Oct 2016 • Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, Wojciech Zaremba
Nevertheless, the overall gist of what the policy does in simulation often remains valid in the real world.
45 code implementations • NeurIPS 2016 • Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
Ranked #14 on Conditional Image Generation on CIFAR-10 (Inception score metric)
Conditional Image Generation • Semi-Supervised Image Classification
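One of the paper's training procedures is feature matching: instead of directly maximizing the discriminator's output, the generator is trained to match the mean activations of an intermediate discriminator layer on real versus generated data. A minimal PyTorch sketch (the feature-extraction hook on the discriminator is an assumption):

```python
import torch

def feature_matching_loss(feats_real: torch.Tensor,
                          feats_fake: torch.Tensor) -> torch.Tensor:
    """Squared distance between mean intermediate-layer activations on
    real and generated batches, as in the paper's feature matching
    objective."""
    return (feats_real.mean(dim=0) - feats_fake.mean(dim=0)).pow(2).sum()

# Usage sketch, assuming the discriminator exposes intermediate features:
# f_real = discriminator.features(real_batch).detach()
# f_fake = discriminator.features(generator(z))
# g_loss = feature_matching_loss(f_real, f_fake)
```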
44 code implementations • 5 Jun 2016 • Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba
OpenAI Gym is a toolkit for reinforcement learning research.
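The toolkit's core abstraction is the environment interface. Below is the canonical interaction loop with a random agent, using the classic pre-0.26 API (newer gym/gymnasium releases return `(obs, info)` from `reset` and split `done` into `terminated`/`truncated`):

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()                       # initial observation
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random agent
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print("episode return:", total_reward)
```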
1 code implementation • 23 Nov 2015 • Wojciech Zaremba, Tomas Mikolov, Armand Joulin, Rob Fergus
We present an approach for learning simple algorithms such as copying, multi-digit addition and single-digit multiplication directly from examples.
5 code implementations • 20 Nov 2015 • Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba
Many natural language processing applications use language models to generate text.
Ranked #14 on Machine Translation on IWSLT2015 German-English
no code implementations • 26 Jun 2015 • Mark Tygert, Arthur Szlam, Soumith Chintala, Marc'Aurelio Ranzato, Yuandong Tian, Wojciech Zaremba
The conventional classification schemes used with convolutional networks (convnets), notably multinomial logistic regression, are classical statistical methods, designed without consideration for their usual coupling with convnets, stochastic gradient descent, and backpropagation.
1 code implementation • 4 May 2015 • Wojciech Zaremba, Ilya Sutskever
The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world.
5 code implementations • IJCNLP 2015 • Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, Wojciech Zaremba
Our experiments on the WMT14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique.
Ranked #40 on Machine Translation on WMT2014 English-French
6 code implementations • 17 Oct 2014 • Wojciech Zaremba, Ilya Sutskever
Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are widely used because they are expressive and are easy to train.
20 code implementations • 8 Sep 2014 • Wojciech Zaremba, Ilya Sutskever, Oriol Vinyals
We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units.
Ranked #35 on Language Modelling on Penn Treebank (Word Level)
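The technique is to apply dropout only to the non-recurrent connections of a stacked LSTM (between layers and on the input/output), leaving the recurrent state untouched. PyTorch's `dropout` argument on `nn.LSTM` implements the between-layer part of this recipe; all sizes below are illustrative:

```python
import torch
import torch.nn as nn

# Dropout on non-recurrent connections only: between stacked layers
# (via nn.LSTM's dropout argument) and on the input and output, never
# on the hidden-to-hidden transitions.
lstm = nn.LSTM(input_size=128, hidden_size=256, num_layers=2,
               dropout=0.5, batch_first=True)
embed_drop = nn.Dropout(0.5)   # dropout on the input embeddings
out_drop = nn.Dropout(0.5)     # dropout before the softmax layer

x = torch.randn(32, 20, 128)   # (batch, time, features)
h, _ = lstm(embed_drop(x))
logits = out_drop(h)           # feed to a decoder / softmax head
```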
1 code implementation • NeurIPS 2014 • Wojciech Zaremba, Karol Kurach, Rob Fergus
In this paper we explore how machine learning techniques can be applied to the discovery of efficient mathematical identities.
no code implementations • NeurIPS 2014 • Emily Denton, Wojciech Zaremba, Joan Bruna, Yann Lecun, Rob Fergus
We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks.
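The paper exploits low-rank structure in the network's weights. As a simplified analogue of its convolutional filter decompositions, a dense layer can be replaced by a truncated-SVD factorization that trades a small approximation error for far fewer multiplications:

```python
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Replace a dense weight matrix W (m x n) with a rank-r
    factorization U_r @ V_r, cutting the cost of W @ x from O(mn) to
    O(r(m+n)). A simplified sketch; the paper applies analogous tensor
    decompositions to convolutional filters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # (m, r), singular values folded in
    V_r = Vt[:rank, :]             # (r, n)
    return U_r, V_r

W = np.random.randn(512, 512)
U_r, V_r = low_rank_factorize(W, rank=64)
x = np.random.randn(512)
approx = U_r @ (V_r @ x)           # two cheap matmuls instead of one large one
print(np.linalg.norm(W @ x - approx) / np.linalg.norm(W @ x))
```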
11 code implementations • 21 Dec 2013 • Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus
Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks.
4 code implementations • 21 Dec 2013 • Joan Bruna, Wojciech Zaremba, Arthur Szlam, Yann Lecun
Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain.
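The paper generalizes convolution to graphs by filtering signals in the eigenbasis of the graph Laplacian. A minimal sketch of that spectral construction (a dense eigendecomposition and one filter coefficient per eigenvalue, both simplifications relative to the paper):

```python
import numpy as np

def spectral_graph_filter(x: np.ndarray, A: np.ndarray,
                          theta: np.ndarray) -> np.ndarray:
    """Filter a node signal x (n,) on a graph with adjacency A (n, n)
    using learned spectral coefficients theta (n,): transform to the
    Laplacian eigenbasis, scale, and transform back."""
    D = np.diag(A.sum(axis=1))
    L = D - A                          # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)           # graph Fourier basis
    x_hat = U.T @ x                    # forward graph Fourier transform
    return U @ (theta * x_hat)         # filter, then inverse transform

# Toy usage: a 4-cycle graph with random filter coefficients
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
y = spectral_graph_filter(np.random.randn(4), A, np.random.randn(4))
```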
no code implementations • NeurIPS 2013 • Wojciech Zaremba, Arthur Gretton, Matthew Blaschko
We propose a family of maximum mean discrepancy (MMD) kernel two-sample tests that have low sample complexity and are consistent.
1 code implementation • 8 Jul 2013 • Wojciech Zaremba, Arthur Gretton, Matthew Blaschko
A family of maximum mean discrepancy (MMD) kernel two-sample tests is introduced.
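Both entries above build on the squared-MMD statistic; the papers' B-tests average it over blocks of the data to trade computation against variance. A minimal sketch of the core unbiased quadratic-time estimator with an RBF kernel (the bandwidth choice is an assumption):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased quadratic-time estimator of squared MMD between samples
    X ~ P and Y ~ Q. The B-test averages such statistics over data
    blocks; this sketch shows only the core statistic."""
    m, n = len(X), len(Y)
    Kxx, Kyy = rbf_kernel(X, X, sigma), rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

X = np.random.randn(100, 2)
Y = np.random.randn(100, 2) + 0.5   # shifted distribution
print(mmd2_unbiased(X, Y))          # clearly positive when P != Q
```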