Search Results for author: Ilya Sutskever

Found 70 papers, 48 papers with code

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision

no code implementations 14 Dec 2023 Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, Jeff Wu

Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior - for example, to evaluate whether a model faithfully followed instructions or generated safe outputs.

Let's Verify Step by Step

3 code implementations Preprint 2023 Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe

We conduct our own investigation, finding that process supervision significantly outperforms outcome supervision for training models to solve problems from the challenging MATH dataset.

 Ranked #1 on Math Word Problem Solving on MATH minival (using extra training data)

Active Learning Math +2

GPT-4 Technical Report

9 code implementations Preprint 2023 OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O'Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. 
Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, Barret Zoph

We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs.

 Ranked #1 on Only Connect Walls Dataset Task 1 (Grouping) on OCW (using extra training data)

Arithmetic Reasoning Bug fixing +9

Consistency Models

8 code implementations 2 Mar 2023 Yang Song, Prafulla Dhariwal, Mark Chen, Ilya Sutskever

Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation.

Colorization Image Inpainting +2

Robust Speech Recognition via Large-Scale Weak Supervision

8 code implementations Preprint 2022 Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever

We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet.

 Ranked #1 on Speech Recognition on Common Voice Italian (using extra training data)

Robust Speech Recognition speech-recognition

Formal Mathematics Statement Curriculum Learning

1 code implementation 3 Feb 2022 Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, Ilya Sutskever

We explore the use of expert iteration in the context of language modeling applied to formal mathematics.

Ranked #3 on Automated Theorem Proving on miniF2F-test (using extra training data)

Automated Theorem Proving Language Modelling

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

2 code implementations 20 Dec 2021 Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen

Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity.

Ranked #33 on Text-to-Image Generation on MS COCO (using extra training data)

Image Inpainting Zero-Shot Text-to-Image Generation

Zero-Shot Text-to-Image Generation

12 code implementations 24 Feb 2021 Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever

Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset.

Ranked #45 on Text-to-Image Generation on MS COCO (using extra training data)

Zero-Shot Text-to-Image Generation

Generative Pretraining from Pixels

4 code implementations ICML 2020 Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, Ilya Sutskever

Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.

Ranked #15 on Image Classification on STL-10 (using extra training data)

Representation Learning Self-Supervised Image Classification

Deep Double Descent: Where Bigger Models and More Data Hurt

3 code implementations ICLR 2020 Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever

We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better.

Language Models are Unsupervised Multitask Learners

15 code implementations Preprint 2019 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

 Ranked #1 on Language Modelling on enwik8 (using extra training data)

Common Sense Reasoning Coreference Resolution +10

The Importance of Sampling in Meta-Reinforcement Learning

no code implementations NeurIPS 2018 Bradly Stadie, Ge Yang, Rein Houthooft, Peter Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever

Results are presented on a new environment we call 'Krazy World': a difficult high-dimensional gridworld which is designed to highlight the importance of correctly differentiating through sampling distributions in meta-reinforcement learning.

Meta Reinforcement Learning reinforcement-learning +1

FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models

7 code implementations ICLR 2019 Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, David Duvenaud

The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures.

Density Estimation Image Generation +1

Improving Language Understanding by Generative Pre-Training

11 code implementations Preprint 2018 Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever

We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task.

Cloze Test Document Classification +6

GamePad: A Learning Environment for Theorem Proving

1 code implementation ICLR 2019 Daniel Huang, Prafulla Dhariwal, Dawn Song, Ilya Sutskever

In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant.

Automated Theorem Proving Position

Generative Models for Alignment and Data Efficiency in Language

no code implementations ICLR 2018 Dustin Tran, Yura Burda, Ilya Sutskever

We examine how learning from unaligned data can improve both the data efficiency of supervised tasks as well as enable alignments without any supervision.

Decipherment Translation +1

Emergent Complexity via Multi-Agent Competition

2 code implementations ICLR 2018 Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, Igor Mordatch

In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself.

Blocking

Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

1 code implementation ICLR 2018 Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, Pieter Abbeel

Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence.

Meta-Learning

An online sequence-to-sequence model for noisy speech recognition

no code implementations 16 Jun 2017 Chung-Cheng Chiu, Dieterich Lawson, Yuping Luo, George Tucker, Kevin Swersky, Ilya Sutskever, Navdeep Jaitly

This is because the models require that the entirety of the input sequence be available at the beginning of inference, an assumption that is not valid for instantaneous speech recognition.

Noisy Speech Recognition speech-recognition

One-Shot Imitation Learning

no code implementations NeurIPS 2017 Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba

A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration.

Feature Engineering Imitation Learning +1
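The abstract spells out the interface: a single network maps (one demonstration, current state) to an action. Below is a toy PyTorch sketch of that interface only; the mean-pooling over demonstration steps stands in for the paper's attention mechanism, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class DemoConditionedPolicy(nn.Module):
    """pi(action | demonstration, current state): encode the demonstration,
    pool it into a context vector, and combine it with the current state."""
    def __init__(self, state_dim=10, action_dim=4, hidden=128):
        super().__init__()
        self.demo_enc = nn.GRU(state_dim + action_dim, hidden, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(hidden + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, demo, state):
        encoded, _ = self.demo_enc(demo)   # (batch, demo_len, hidden)
        context = encoded.mean(dim=1)      # mean-pool in place of attention
        return self.policy(torch.cat([context, state], dim=-1))

policy = DemoConditionedPolicy()
demo = torch.randn(2, 50, 14)   # one demonstration: 50 (state, action) pairs
state = torch.randn(2, 10)      # current state
print(policy(demo, state).shape)  # torch.Size([2, 4])
```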

Evolution Strategies as a Scalable Alternative to Reinforcement Learning

23 code implementations 10 Mar 2017 Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever

We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients.

Atari Games Q-Learning +2
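As a reading aid, here is a minimal NumPy sketch of the ES estimator the abstract refers to: perturb the parameters with Gaussian noise, evaluate the returns, and step along the noise directions weighted by the normalized returns. The toy objective and hyperparameters are placeholders, not the paper's distributed setup or tuned values.

```python
import numpy as np

def evolution_strategies(f, theta, sigma=0.1, lr=0.02, pop=50, iters=300):
    """Black-box ascent on f: grad ~ mean(f(theta + sigma*eps) * eps) / sigma."""
    for _ in range(iters):
        eps = np.random.randn(pop, theta.size)
        returns = np.array([f(theta + sigma * e) for e in eps])
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # fitness shaping
        theta = theta + lr / (pop * sigma) * eps.T @ returns
    return theta

# Toy "reward": negative squared distance to a target parameter vector.
target = np.array([1.0, -2.0, 0.5])
print(evolution_strategies(lambda w: -np.sum((w - target) ** 2), np.zeros(3)).round(2))
```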

Third-Person Imitation Learning

1 code implementation 6 Mar 2017 Bradly C. Stadie, Pieter Abbeel, Ilya Sutskever

A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize.

Imitation Learning reinforcement-learning +1

Improved Variational Inference with Inverse Autoregressive Flow

2 code implementations NeurIPS 2016 Durk P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling

The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables.

Ranked #40 on Image Generation on CIFAR-10 (bits/dimension metric)

Image Generation Variational Inference

An Online Sequence-to-Sequence Model Using Partial Conditioning

1 code implementation NeurIPS 2016 Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Ilya Sutskever, David Sussillo, Samy Bengio

However, they are unsuitable for tasks that require incremental predictions to be made as more data arrives or tasks that have long input sequences and output sequences.

Variational Lossy Autoencoder

no code implementations 8 Nov 2016 Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, Pieter Abbeel

Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification.

Density Estimation Image Generation +1

Extensions and Limitations of the Neural GPU

1 code implementation 2 Nov 2016 Eric Price, Wojciech Zaremba, Ilya Sutskever

We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before).

Learning Online Alignments with Continuous Rewards Policy Gradient

no code implementations 3 Aug 2016 Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, Ilya Sutskever

Though capable and easy to use, they require that the entirety of the input sequence is available at the beginning of inference, an assumption that is not valid for instantaneous translation and speech recognition.

Machine Translation Question Answering +4

Improving Variational Inference with Inverse Autoregressive Flow

8 code implementations 15 Jun 2016 Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling

The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables.

Variational Inference

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

37 code implementations NeurIPS 2016 Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel

This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.

Generative Adversarial Network Image Generation +3

Continuous Deep Q-Learning with Model-based Acceleration

8 code implementations 2 Mar 2016 Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, Sergey Levine

In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks.

Continuous Control Q-Learning +2

Neural GPUs Learn Algorithms

5 code implementations 25 Nov 2015 Łukasz Kaiser, Ilya Sutskever

Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run.

Adding Gradient Noise Improves Learning for Very Deep Networks

4 code implementations 21 Nov 2015 Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens

This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks.

Question Answering
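The technique in the title is simple enough to sketch: add annealed Gaussian noise to every gradient before the optimizer step. A hedged PyTorch sketch follows; the decay schedule sigma_t^2 = eta / (1 + t)^gamma matches the form described in the paper, but treat the constants here as illustrative defaults.

```python
import torch

def add_gradient_noise(model, step, eta=0.3, gamma=0.55):
    """Add N(0, sigma_t^2) noise to each gradient, sigma_t^2 = eta / (1 + step)^gamma."""
    variance = eta / (1 + step) ** gamma
    for p in model.parameters():
        if p.grad is not None:
            p.grad.add_(torch.randn_like(p.grad) * variance ** 0.5)

# Usage inside a training loop (model, loss, optimizer, step assumed to exist):
#   loss.backward()
#   add_gradient_noise(model, step)
#   optimizer.step()
```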

Towards Principled Unsupervised Learning

no code implementations 19 Nov 2015 Ilya Sutskever, Rafal Jozefowicz, Karol Gregor, Danilo Rezende, Tim Lillicrap, Oriol Vinyals

Supervised learning is successful because it can be solved by the minimization of the training error cost function.

Domain Adaptation

Multi-task Sequence to Sequence Learning

no code implementations 19 Nov 2015 Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser

This paper examines three multi-task learning (MTL) settings for sequence to sequence models: (a) the one-to-many setting - where the encoder is shared between several tasks such as machine translation and syntactic parsing, (b) the many-to-one setting - useful when only the decoder can be shared, as in the case of translation and image caption generation, and (c) the many-to-many setting - where multiple encoders and decoders are shared, which is the case with unsupervised objectives and translation.

Machine Translation Multi-Task Learning +1

Neural Random-Access Machines

no code implementations 19 Nov 2015 Karol Kurach, Marcin Andrychowicz, Ilya Sutskever

In this paper, we propose and investigate a new neural network architecture called Neural Random Access Machine.

Neural Programmer: Inducing Latent Programs with Gradient Descent

no code implementations 16 Nov 2015 Arvind Neelakantan, Quoc V. Le, Ilya Sutskever

In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations.

Question Answering speech-recognition +1

A Neural Transducer

no code implementations 16 Nov 2015 Navdeep Jaitly, David Sussillo, Quoc V. Le, Oriol Vinyals, Ilya Sutskever, Samy Bengio

However, they are unsuitable for tasks that require incremental predictions to be made as more data arrives or tasks that have long input sequences and output sequences.

MuProp: Unbiased Backpropagation for Stochastic Neural Networks

2 code implementations 16 Nov 2015 Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih

Deep neural networks are powerful parametric models that can be trained efficiently using the backpropagation algorithm.

Reinforcement Learning Neural Turing Machines - Revised

1 code implementation 4 May 2015 Wojciech Zaremba, Ilya Sutskever

The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world.

reinforcement-learning Reinforcement Learning (RL)

Grammar as a Foreign Language

8 code implementations NeurIPS 2015 Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, Geoffrey Hinton

Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades.

Constituency Parsing

Move Evaluation in Go Using Deep Convolutional Neural Networks

1 code implementation 20 Dec 2014 Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver

The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function.

Game of Go Position

Addressing the Rare Word Problem in Neural Machine Translation

5 code implementations IJCNLP 2015 Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, Wojciech Zaremba

Our experiments on the WMT14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique.

Machine Translation NMT +3

Learning to Execute

6 code implementations 17 Oct 2014 Wojciech Zaremba, Ilya Sutskever

Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are widely used because they are expressive and are easy to train.

Learning to Execute

Sequence to Sequence Learning with Neural Networks

73 code implementations NeurIPS 2014 Ilya Sutskever, Oriol Vinyals, Quoc V. Le

Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.

Ranked #4 on Traffic Prediction on PeMS-M (using extra training data)

Machine Translation Sentence +2
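The abstract above gives the core recipe: one LSTM reads the source into a fixed-size state and a second LSTM decodes the target from it. A minimal PyTorch sketch of that encoder-decoder structure, with illustrative sizes rather than the paper's much larger configuration:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Toy encoder-decoder: one LSTM maps the source to a fixed-size state,
    a second LSTM decodes the target conditioned on that state."""
    def __init__(self, src_vocab, tgt_vocab, emb=256, hidden=512, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))        # fixed-size (h, c) summary
        out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # teacher-forced decoding
        return self.proj(out)                                  # logits over target vocab

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
logits = model(torch.randint(0, 8000, (4, 12)), torch.randint(0, 8000, (4, 10)))
print(logits.shape)  # torch.Size([4, 10, 8000])
```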

Recurrent Neural Network Regularization

21 code implementations 8 Sep 2014 Wojciech Zaremba, Ilya Sutskever, Oriol Vinyals

We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units.

Image Captioning Language Modelling +3
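The "simple regularization technique" here is dropout applied only to the non-recurrent connections of a stacked LSTM, leaving the hidden-to-hidden path untouched. A minimal PyTorch sketch with illustrative sizes, not the paper's exact language-model configuration:

```python
import torch
import torch.nn as nn

class DropoutLSTM(nn.Module):
    """Two stacked LSTM layers; dropout only on the layer-to-layer inputs,
    never on the recurrent (hidden-to-hidden) connections."""
    def __init__(self, vocab, emb=200, hidden=200, p=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.layer1 = nn.LSTM(emb, hidden, batch_first=True)
        self.layer2 = nn.LSTM(hidden, hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, ids):
        x = self.drop(self.embed(ids))      # embed -> layer 1 (non-recurrent)
        h1, _ = self.layer1(x)
        h2, _ = self.layer2(self.drop(h1))  # layer 1 -> layer 2 (non-recurrent)
        return self.head(self.drop(h2))     # layer 2 -> softmax (non-recurrent)
```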

Intriguing properties of neural networks

12 code implementations 21 Dec 2013 Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus

Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks.

Learning Factored Representations in a Deep Mixture of Experts

no code implementations 16 Dec 2013 David Eigen, Marc'Aurelio Ranzato, Ilya Sutskever

In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones.

Distributed Representations of Words and Phrases and their Compositionality

51 code implementations NeurIPS 2013 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean

Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
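The phrase-finding method referred to above scores each bigram by how much more often it occurs than its parts alone would suggest, and merges high-scoring bigrams into single tokens. A small sketch of that scoring rule; the discount delta and threshold are illustrative, and the paper applies several passes to build longer phrases.

```python
from collections import Counter

def find_phrases(tokens, delta=5, threshold=1e-4):
    """Score each bigram by (count(a,b) - delta) / (count(a) * count(b));
    bigrams above the threshold become single 'a_b' tokens."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {
        f"{a}_{b}"
        for (a, b), n in bigrams.items()
        if (n - delta) / (unigrams[a] * unigrams[b]) > threshold
    }

corpus = ("new york is large . she moved to new york . "
          "new york city hosts the new year parade .").split()
print(find_phrases(corpus, delta=1, threshold=0.05))  # {'new_york'}
```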

Exploiting Similarities among Languages for Machine Translation

8 code implementations 17 Sep 2013 Tomas Mikolov, Quoc V. Le, Ilya Sutskever

Dictionaries and phrase tables are the basis of modern statistical machine translation systems.

Machine Translation Translation

On the importance of initialization and momentum in deep learning

no code implementations ICML 2013 Ilya Sutskever, James Martens, George Dahl, Geoffrey Hinton

Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum.

Second-order methods

ImageNet Classification with Deep Convolutional Neural Networks

19 code implementations NeurIPS 2012 Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton

We trained a large, deep convolutional neural network to classify the 1.3 million high-resolution images in the LSVRC-2010 ImageNet training set into the 1000 different classes.

General Classification Graph Classification +1

Cardinality Restricted Boltzmann Machines

no code implementations NeurIPS 2012 Kevin Swersky, Ilya Sutskever, Daniel Tarlow, Richard S. Zemel, Ruslan R. Salakhutdinov, Ryan P. Adams

The Restricted Boltzmann Machine (RBM) is a popular density model that is also good for extracting features.

Modelling Relational Data using Bayesian Clustered Tensor Factorization

no code implementations NeurIPS 2009 Ilya Sutskever, Joshua B. Tenenbaum, Ruslan R. Salakhutdinov

We consider the problem of learning probabilistic models for complex relational structures between various types of objects.

Clustering

Using matrices to model symbolic relationships

no code implementations NeurIPS 2008 Ilya Sutskever, Geoffrey E. Hinton

We describe a way of learning matrix representations of objects and relationships.

The Recurrent Temporal Restricted Boltzmann Machine

no code implementations NeurIPS 2008 Ilya Sutskever, Geoffrey E. Hinton, Graham W. Taylor

The Temporal Restricted Boltzmann Machine (TRBM) is a probabilistic model for sequences that is able to successfully model (i.e., generate nice-looking samples of) several very high dimensional sequences, such as motion capture data and the pixels of low resolution videos of balls bouncing in a box.
