no code implementations • 15 Oct 2019 • Antoine Saporta, Yifu Chen, Michael Blot, Matthieu Cord
Information-theoretic studies of the generalization performance of machine learning algorithms suggest that compressed representations can guarantee good generalization, inspiring many compression-based regularization methods.
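As a toy illustration of the compression-based regularization idea (not necessarily the method proposed in this paper), one common scheme adds a penalty on the magnitude of hidden representations to the task loss, pushing the network toward more compressed activations. The function and values below are purely hypothetical:

```python
import numpy as np

def l1_activation_penalty(h, lam=1e-3):
    """Compression-style regularizer: penalize the L1 norm of the hidden
    representations h, encouraging sparse (more compressed) activations."""
    return lam * np.abs(h).sum()

# Hypothetical hidden activations for a mini-batch of two examples.
h = np.array([[0.5, -1.2, 0.0],
              [2.0, 0.1, -0.3]])

task_loss = 0.42  # stand-in value for a data-fit term
total_loss = task_loss + l1_activation_penalty(h)
print(total_loss)
```

The regularizer is simply added to the training objective, so any gradient-based optimizer minimizes the data-fit term and the compression penalty jointly.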
no code implementations • 7 Aug 2019 • Martin Mihelich, Charles Dognin, Yan Shu, Michael Blot
First, we prove that for any estimator, increasing the number of bagged estimators $N$ in the average can only reduce the MSE.
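The claim that averaging more bagged estimators can only reduce the MSE can be checked numerically. The sketch below (my own illustration, not the paper's proof) bags a sample-mean estimator over bootstrap resamples and compares the Monte Carlo MSE for $N=1$ versus $N=25$:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_MEAN = 0.0

def bagged_estimate(data, n_estimators, rng):
    """Average of n_estimators sample means, each fit on a bootstrap resample."""
    estimates = []
    for _ in range(n_estimators):
        boot = rng.choice(data, size=len(data), replace=True)
        estimates.append(boot.mean())
    return float(np.mean(estimates))

def mse(n_estimators, trials=2000, n=30):
    """Monte Carlo MSE of the bagged estimator of the mean of N(0, 1) data."""
    errors = []
    for _ in range(trials):
        data = rng.normal(TRUE_MEAN, 1.0, size=n)
        errors.append((bagged_estimate(data, n_estimators, rng) - TRUE_MEAN) ** 2)
    return float(np.mean(errors))

mse_1, mse_25 = mse(1), mse(25)
print(mse_1, mse_25)  # averaging 25 bagged estimators yields a lower MSE
```

Intuitively, each bootstrap resample adds noise on top of the sample mean; averaging over $N$ resamples shrinks that extra noise by a factor of $N$, so the MSE decreases monotonically toward that of the underlying estimator.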
1 code implementation • 14 May 2018 • Michael Blot, Thomas Robert, Nicolas Thome, Matthieu Cord
Regularization is a major challenge in training deep neural networks.
no code implementations • 29 Apr 2018 • Michael Blot, Thomas Robert, Nicolas Thome, Matthieu Cord
Regularization is a major challenge in training deep neural networks.
no code implementations • 4 Apr 2018 • Michael Blot, David Picard, Matthieu Cord
We address the issue of speeding up the training of convolutional neural networks by studying a distributed method adapted to stochastic gradient descent.
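A minimal sketch of synchronous distributed SGD, assuming the standard gradient-averaging scheme (the paper may study a different communication strategy): each worker computes a gradient on its local data shard, and the averaged gradient drives one update, which is equivalent to a single large-batch step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear-regression data, split across hypothetical workers.
n_workers, n_per_worker, dim = 4, 256, 5
w_true = rng.normal(size=dim)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_per_worker, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def local_gradient(w, X, y):
    """Least-squares gradient computed on one worker's shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Synchronous distributed SGD: average the workers' gradients each step.
w = np.zeros(dim)
lr = 0.1
for _ in range(200):
    grads = [local_gradient(w, X, y) for X, y in shards]
    w -= lr * np.mean(grads, axis=0)

print(np.linalg.norm(w - w_true))  # should be small after convergence
```

In practice the averaging step is the communication bottleneck, which is exactly what distributed variants of SGD aim to reduce.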
no code implementations • ICLR 2018 • Michael Blot, Thomas Robert, Nicolas Thome, Matthieu Cord
Regularization is a major challenge in training deep neural networks.
1 code implementation • 29 Nov 2016 • Michael Blot, David Picard, Matthieu Cord, Nicolas Thome
We address the issue of speeding up the training of convolutional networks.
no code implementations • 25 Oct 2016 • Michael Blot, Matthieu Cord, Nicolas Thome
Convolutional neural networks (CNNs) are widely used in computer vision, especially for image classification.