no code implementations • 16 Jan 2013 • Ian J. Goodfellow, Aaron Courville, Yoshua Bengio
We introduce a new method for training deep Boltzmann machines jointly.
7 code implementations • 18 Feb 2013 • Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio
We consider the problem of designing models to leverage a recently introduced approximate model averaging technique called dropout.
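This entry is the maxout work: a hidden unit designed to pair well with dropout's approximate model averaging computes the max over k affine pieces. A minimal NumPy sketch (toy dimensions and random parameters, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def maxout(x, W, b):
    """Maxout unit: the activation is the maximum over k affine pieces,
    h_i = max_j (x @ W[:, i, j] + b[i, j]). Being piecewise linear, it
    works well with dropout's approximate model averaging."""
    return np.max(np.einsum('nd,dmk->nmk', x, W) + b, axis=-1)

n, d, m, k = 4, 8, 5, 3      # batch, input dim, output units, pieces per unit
x = rng.standard_normal((n, d))
W = rng.standard_normal((d, m, k))
b = rng.standard_normal((m, k))
h = maxout(x, W, b)          # shape (n, m)
```

Each output is at least as large as any single affine piece, so the unit can learn an arbitrary convex piecewise-linear activation.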
Ranked #34 on Image Classification on MNIST
11 code implementations • 1 Jul 2013 • Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, Yoshua Bengio
The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge.
Ranked #12 on Facial Expression Recognition (FER) on FER2013
6 code implementations • 20 Aug 2013 • Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Frédéric Bastien, Yoshua Bengio
Pylearn2 is a machine learning research library.
no code implementations • 18 Dec 2013 • Vincent Dumoulin, Ian J. Goodfellow, Aaron Courville, Yoshua Bengio
Restricted Boltzmann machines (RBMs) are powerful machine learning models, but learning and some kinds of inference in the model require sampling-based approximations, which, in classical digital computers, are implemented using expensive MCMC.
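The expensive MCMC the abstract refers to is typically block Gibbs sampling, alternating between the hidden and visible layers. A hedged NumPy sketch of one step in a binary RBM (toy sizes and random parameters, not any particular trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_step(v, W, b, c):
    """One block-Gibbs step in a binary RBM: sample hidden units given
    the visible layer, then visible units given the hidden layer.
    Repeating this many times is the costly MCMC in question."""
    h = (rng.random(c.shape) < sigmoid(v @ W + c)).astype(float)
    v_new = (rng.random(b.shape) < sigmoid(h @ W.T + b)).astype(float)
    return v_new, h

n_vis, n_hid = 6, 4
W = 0.1 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)   # visible biases
c = np.zeros(n_hid)   # hidden biases
v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(100):
    v, h = gibbs_step(v, W, b, c)
```

Because hidden units are conditionally independent given the visible layer (and vice versa), each half-step is a single vectorized sample.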
17 code implementations • 20 Dec 2013 • Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, Vinay Shet
In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels.
Ranked #30 on Image Classification on SVHN
1 code implementation • 21 Dec 2013 • Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, Yoshua Bengio
Catastrophic forgetting is a problem faced by many machine learning models and algorithms.
no code implementations • 21 Dec 2013 • David Warde-Farley, Ian J. Goodfellow, Aaron Courville, Yoshua Bengio
The recently introduced dropout training criterion for neural networks has been the subject of much attention due to its simplicity and remarkable effectiveness as a regularizer, as well as its interpretation as a training procedure for an exponentially large ensemble of networks that share parameters.
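The ensemble interpretation above can be sketched in a few lines of NumPy (a minimal illustrative layer, not the paper's architecture): at train time each unit is kept with probability p and rescaled, so a single test-time pass with all units approximates averaging over the exponentially many parameter-sharing sub-networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, W, p_keep=0.5, train=True):
    """One dropout layer using 'inverted' scaling: at train time,
    randomly zero units and divide by p_keep; at test time, use all
    units unchanged, approximating the ensemble average."""
    h = np.maximum(0.0, x @ W)   # ReLU nonlinearity (illustrative choice)
    if train:
        mask = (rng.random(h.shape) < p_keep) / p_keep
        return h * mask
    return h

x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16))
h_train = dropout_forward(x, W, train=True)   # a random sub-network's output
h_test = dropout_forward(x, W, train=False)   # approximate ensemble average
```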
183 code implementations • NeurIPS 2014 • Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.
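The two-player game above is defined by the value function V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], which D maximizes and G minimizes. A toy NumPy sketch of a Monte Carlo estimate of V, with a fixed 1-D logistic discriminator and Gaussian samples standing in for real and generated data (all distributions and parameters illustrative, not the paper's MLP setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def value_fn(D, x_real, x_fake):
    """Monte Carlo estimate of the GAN value function
    V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    where D maps samples to probabilities in (0, 1)."""
    eps = 1e-8
    return np.mean(np.log(D(x_real) + eps)) + np.mean(np.log(1.0 - D(x_fake) + eps))

def D(x, w=1.0, b=-2.0):
    """Fixed 1-D logistic discriminator (illustrative parameters)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

x_real = rng.normal(4.0, 1.0, size=1000)   # stand-in for the data distribution
x_fake = rng.normal(0.0, 1.0, size=1000)   # stand-in for a poor generator's samples
v = value_fn(D, x_real, x_fake)
```

At the game's equilibrium, where G matches the data distribution and D outputs 1/2 everywhere, V equals -log 4; here the discriminator separates the two distributions easily, so the estimate sits well above that.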
1 code implementation • 19 Dec 2014 • Ian J. Goodfellow
However, we show that recovering MLE for a learned generator requires departing from the distinguishability game.
1 code implementation • 19 Dec 2014 • Ian J. Goodfellow, Oriol Vinyals, Andrew M. Saxe
Training neural networks involves solving large-scale non-convex optimization problems.
59 code implementations • 20 Dec 2014 • Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy
Several machine learning models, including neural networks, consistently misclassify adversarial examples: inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence.
Ranked #57 on Image Classification on MNIST
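The worst-case perturbation described in this entry is computed with the fast gradient sign method introduced in the paper: x_adv = x + eps * sign(∇_x L). A hedged NumPy sketch on a toy logistic model with a hand-derived gradient (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm(x, grad_wrt_x, eps=0.25):
    """Fast gradient sign method: move the input a distance eps (in the
    max norm) in the direction that increases the loss."""
    return x + eps * np.sign(grad_wrt_x)

# Toy logistic-regression example with the loss gradient computed by hand.
w = rng.standard_normal(8)
x = rng.standard_normal(8)
y = 1.0                                  # true label
p = 1.0 / (1.0 + np.exp(-(w @ x)))       # model's probability of the true class
grad = (p - y) * w                       # d(cross-entropy)/dx for this linear model
x_adv = fgsm(x, grad, eps=0.25)
p_adv = 1.0 / (1.0 + np.exp(-(w @ x_adv)))
```

Because the perturbation follows the sign of the loss gradient, the model's confidence in the true class can only drop, even though each coordinate of the input moves by at most eps.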
7 code implementations • NeurIPS 2018 • Avital Oliver, Augustus Odena, Colin Raffel, Ekin D. Cubuk, Ian J. Goodfellow
However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications.