no code implementations • 27 Jul 2021 • Open Ended Learning Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michael Mathieu, Nat McAleese, Nathalie Bradley-Schmieg, Nathaniel Wong, Nicolas Porcel, Roberta Raileanu, Steph Hughes-Fitt, Valentin Dalibard, Wojciech Marian Czarnecki
The resulting space is exceptionally diverse in terms of the challenges posed to agents, and as such, even measuring the learning progress of an agent is an open research problem.
3 code implementations • 10 Nov 2016 • Michael Mathieu, Junbo Zhao, Pablo Sprechmann, Aditya Ramesh, Yann LeCun
During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class.
3 code implementations • 11 Sep 2016 • Junbo Zhao, Michael Mathieu, Yann LeCun
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions.
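As a rough illustration, the objectives described above can be written as hinge-style energy losses: the discriminator outputs a scalar energy per sample (in the paper, an auto-encoder's reconstruction error), pushed down on real data and up, to a margin, on generated samples. A minimal sketch, assuming PyTorch; the function name, the margin value, and the D/G callables are illustrative assumptions:

import torch
import torch.nn.functional as F

def ebgan_losses(D, G, x_real, z, margin=10.0):
    # D returns a scalar energy per sample (e.g. an auto-encoder's
    # reconstruction error); the margin value here is an assumption.
    x_fake = G(z)
    loss_D = D(x_real).mean() + F.relu(margin - D(x_fake.detach())).mean()
    loss_G = D(x_fake).mean()  # the generator lowers the energy of its samples
    return loss_D, loss_G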
5 code implementations • 17 Nov 2015 • Michael Mathieu, Camille Couprie, Yann LeCun
Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics.
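For intuition, the basic setup is a network that maps the last k frames to the next one. A minimal sketch, assuming PyTorch; the tiny architecture and the plain-MSE loss are illustrative assumptions only (the paper argues that plain MSE produces blurry predictions and proposes sharper losses):

import torch
import torch.nn as nn

# Toy next-frame predictor: a small conv net maps k past RGB frames,
# stacked along the channel axis, to the next frame.
k = 4
predictor = nn.Sequential(
    nn.Conv2d(3 * k, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
frames = torch.randn(8, 3 * (k + 1), 32, 32)   # toy batch of short clips
past, target = frames[:, :3 * k], frames[:, 3 * k:]
loss = nn.functional.mse_loss(predictor(past), target)
loss.backward()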
no code implementations • NeurIPS 2015 • Ross Goroshin, Michael Mathieu, Yann LeCun
Training deep feature hierarchies to solve supervised learning tasks has achieved state of the art performance on many problems in computer vision.
2 code implementations • 8 Jun 2015 • Junbo Zhao, Michael Mathieu, Ross Goroshin, Yann LeCun
The objective function includes reconstruction terms that induce the hidden states in the Deconvnet to be similar to those of the Convnet.
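A minimal sketch of such an objective, assuming PyTorch: a supervised term, an input-reconstruction term, and intermediate terms that pull each Deconvnet hidden state toward its Convnet counterpart. The function name and loss weights are illustrative assumptions:

import torch.nn.functional as F

def swwae_loss(conv_feats, deconv_feats, x, x_rec, y_logits, y,
               lam_rec=1.0, lam_mid=0.1):
    # Classification + input reconstruction + intermediate reconstruction
    # terms matching each Deconvnet state to its Convnet counterpart.
    loss = F.cross_entropy(y_logits, y) + lam_rec * F.mse_loss(x_rec, x)
    for h_conv, h_deconv in zip(conv_feats, deconv_feats):
        loss = loss + lam_mid * F.mse_loss(h_deconv, h_conv)
    return loss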
5 code implementations • 24 Dec 2014 • Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, Marc'Aurelio Ranzato
In this paper, we show that learning longer-term patterns in real data, such as in natural language, is perfectly possible using gradient descent.
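One simple way to bias a recurrent network toward longer memory is to add a slowly updating context state to a plain recurrent layer. The sketch below is illustrative, assuming PyTorch; the parameterization, names, and decay factor are assumptions rather than the paper's exact model:

import torch
import torch.nn as nn

class SlowContextRNN(nn.Module):
    # A plain recurrent layer plus a context state that changes slowly
    # (controlled by alpha), so it can carry longer-term information.
    def __init__(self, n_in, n_hid, n_ctx, alpha=0.95):
        super().__init__()
        self.alpha = alpha
        self.B = nn.Linear(n_in, n_ctx, bias=False)   # input -> slow context
        self.rnn_in = nn.Linear(n_in + n_ctx, n_hid)
        self.rnn_rec = nn.Linear(n_hid, n_hid, bias=False)

    def forward(self, x_seq):                         # x_seq: (T, batch, n_in)
        T, bsz, _ = x_seq.shape
        h = x_seq.new_zeros(bsz, self.rnn_rec.in_features)
        s = x_seq.new_zeros(bsz, self.B.out_features)
        outs = []
        for x in x_seq:
            s = (1 - self.alpha) * self.B(x) + self.alpha * s  # slow update
            h = torch.sigmoid(self.rnn_in(torch.cat([x, s], -1))
                              + self.rnn_rec(h))
            outs.append(torch.cat([h, s], -1))
        return torch.stack(outs)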
2 code implementations • 24 Dec 2014 • Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun
We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units.
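A microbenchmark in this spirit, assuming PyTorch and a CUDA device; the layer shape, batch size, and warm-up count are arbitrary assumptions:

import torch
import torch.nn as nn

# Time the forward+backward pass of a single conv layer with CUDA events.
if torch.cuda.is_available():
    conv = nn.Conv2d(64, 128, kernel_size=3, padding=1).cuda()
    x = torch.randn(32, 64, 56, 56, device="cuda", requires_grad=True)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(3):                     # warm-up iterations
        conv(x).sum().backward()
    torch.cuda.synchronize()
    start.record()
    conv(x).sum().backward()
    end.record()
    torch.cuda.synchronize()
    print(f"fwd+bwd: {start.elapsed_time(end):.2f} ms")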
1 code implementation • 20 Dec 2014 • Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, Sumit Chopra
We propose a strong baseline model for unsupervised feature learning using video data.
1 code implementation • 30 Nov 2014 • Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, Yann LeCun
We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and are located in a well-defined band lower-bounded by the global minimum.
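For context, this analysis connects the network's loss surface to the Hamiltonian of the spherical spin-glass model, whose low critical values are known to form exactly such a layered band above the global minimum. A hedged restatement of that object (the notation below is an assumption):

$$\mathcal{H}_{\Lambda,H}(\tilde{w}) \;=\; \frac{1}{\Lambda^{(H-1)/2}} \sum_{i_1,\dots,i_H=1}^{\Lambda} X_{i_1 \dots i_H}\,\tilde{w}_{i_1}\cdots\tilde{w}_{i_H}, \qquad \frac{1}{\Lambda}\sum_{i=1}^{\Lambda}\tilde{w}_i^2 = 1,$$

with i.i.d. standard Gaussian couplings $X_{i_1 \dots i_H}$; here $\Lambda$ plays the role of the network size and $H$ the number of layers.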
no code implementations • 29 Apr 2014 • Michael Mathieu, Yann LeCun
A new method to represent and approximate rotation matrices is introduced.
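For a concrete picture of one such representation (illustrative only; the pairing scheme below is an arbitrary assumption, not the paper's construction): any product of elementary Givens rotations is itself a rotation, so an n x n rotation can be parameterized, and a target rotation approximated, by fitting the pair angles.

import numpy as np

def givens(n, i, j, theta):
    # Elementary rotation acting only on coordinates i and j.
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G

n = 4
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
angles = np.random.uniform(0, 2 * np.pi, size=len(pairs))
R = np.eye(n)
for theta, (i, j) in zip(angles, pairs):
    R = R @ givens(n, i, j, theta)
assert np.allclose(R @ R.T, np.eye(n))   # R is orthogonal by construction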
4 code implementations • 21 Dec 2013 • Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun
This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results on the detection and classification tasks.
no code implementations • 20 Dec 2013 • Michael Mathieu, Mikael Henaff, Yann LeCun
Convolutional networks are one of the most widely employed architectures in computer vision and machine learning.
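This entry concerns accelerating convolutional networks by computing convolutions in the Fourier domain. A minimal NumPy sketch of the convolution theorem such approaches exploit; the circular-convolution setting and the sizes are assumptions:

import numpy as np

# Circular convolution equals pointwise multiplication in the Fourier
# domain: O(n^2 log n) via FFT instead of O(n^4) directly.
img = np.random.randn(32, 32)
ker = np.random.randn(32, 32)            # kernel zero-padded to image size
fft_conv = np.fft.irfft2(np.fft.rfft2(img) * np.fft.rfft2(ker), s=img.shape)

# Direct circular convolution, for verification.
direct = np.zeros_like(img)
for u in range(32):
    for v in range(32):
        direct[u, v] = sum(img[a, b] * ker[(u - a) % 32, (v - b) % 32]
                           for a in range(32) for b in range(32))
assert np.allclose(fft_conv, direct)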
no code implementations • NeurIPS 2010 • Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Michael Mathieu, Yann LeCun
We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features.
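Such feature learning typically alternates sparse inference with dictionary updates; below is a minimal sketch of the inference step only, in one dimension with a single filter and ISTA-style updates. The step size, threshold, and iteration count are illustrative assumptions:

import numpy as np

def ista(x, d, lam=0.1, lr=0.05, steps=200):
    # Find a sparse code z such that the convolution d * z approximates x.
    z = np.zeros(len(x) - len(d) + 1)
    for _ in range(steps):
        r = np.convolve(z, d, mode="full") - x         # residual d * z - x
        grad = np.correlate(r, d, mode="valid")        # gradient w.r.t. z
        z = z - lr * grad
        z = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(0)
d = np.array([1.0, -1.0, 0.5])
z_true = rng.standard_normal(64) * (rng.random(64) < 0.1)   # sparse code
x = np.convolve(z_true, d)                                  # observed signal
z_hat = ista(x, d)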