1 code implementation • 25 May 2017 • Jean Lafond, Nicolas Vasilache, Léon Bottou
We define a second-order neural network stochastic gradient training algorithm whose block-diagonal structure effectively amounts to normalizing the unit activations.
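The paper's block-diagonal second-order method is not reproduced here, but the idea that preconditioning by activation statistics mimics normalized activations can be sketched with a simplified diagonal variant. All names and shapes below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Illustrative sketch only: rescale each layer's gradient by the inverse
# second moment of its input activations, so the update behaves like plain
# SGD on normalized activations. A single linear layer stands in for a
# network; the paper's block-diagonal curvature estimate is replaced by a
# diagonal one for brevity.
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=2.0, size=(256, 8))     # un-normalized "activations"
W = np.zeros((8, 1))                                  # layer weights
y = X @ np.ones((8, 1)) + 0.1 * rng.normal(size=(256, 1))

lr, eps = 0.1, 1e-8
for _ in range(100):
    grad = X.T @ (X @ W - y) / len(X)                 # least-squares gradient
    d = np.mean(X**2, axis=0)[:, None]                # diagonal activation second moment
    W -= lr * grad / (d + eps)                        # activation-scaled update

final_loss = float(np.mean((X @ W - y) ** 2))
```

Without the rescaling, the same learning rate on these large-scale activations would diverge; dividing by the per-unit second moment plays the role that normalizing the activations would.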
no code implementations • 5 Dec 2016 • Hoi-To Wai, Jean Lafond, Anna Scaglione, Eric Moulines
The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an inexact Frank-Wolfe (FW) algorithm.
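For context, a minimal centralized Frank-Wolfe iteration looks as follows; the decentralized, inexact-oracle variant analyzed in the paper is not reproduced. The quadratic objective and the simplex constraint are chosen here purely for illustration:

```python
import numpy as np

def frank_wolfe(grad, x0, steps=2000):
    """Frank-Wolfe over the probability simplex.

    The linear minimization oracle over the simplex just selects the
    vertex (coordinate) with the smallest gradient entry.
    """
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LMO: best simplex vertex
        gamma = 2.0 / (t + 2.0)          # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Minimize 0.5*||x - target||^2 over the simplex; target lies inside it,
# so the minimizer is target itself.
target = np.array([0.5, 0.3, 0.2])
x = frank_wolfe(lambda v: v - target, np.ones(3) / 3)
```

Because every iterate is a convex combination of simplex vertices, the method is projection-free, which is what makes it attractive in decentralized settings where projections are expensive.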
no code implementations • 5 Oct 2015 • Jean Lafond, Hoi-To Wai, Eric Moulines
With a strongly convex stochastic cost and when the optimal solution lies in the interior of the constraint set or the constraint set is a polytope, the regret bound and anytime optimality are shown to be ${\cal O}( \log^3 T / T )$ and ${\cal O}( \log^2 T / T)$, respectively, where $T$ is the number of rounds played.
no code implementations • 24 Feb 2015 • Jean Lafond
We first consider an estimator defined as the minimizer of the sum of a log-likelihood term and a nuclear norm penalization and prove an upper bound on the Frobenius prediction risk.
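An estimator of this shape is typically computed by proximal gradient descent, whose proximal step for the nuclear norm is singular value soft-thresholding. The sketch below substitutes a squared loss for the paper's log-likelihood term and uses made-up problem sizes, so it is an assumption-laden illustration, not the paper's estimator:

```python
import numpy as np

def svt(M, lam):
    """Prox of lam*||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(1)
L0 = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))  # rank-2 ground truth
mask = rng.random((20, 20)) < 0.5                         # observed entries
X, lam, step = np.zeros_like(L0), 0.5, 1.0                # step = 1/Lipschitz here

for _ in range(300):
    grad = mask * (X - L0)              # gradient of 0.5*||mask*(X - L0)||_F^2
    X = svt(X - step * grad, step * lam)
```

Each iteration decreases the penalized objective (squared loss plus `lam` times the nuclear norm), and the thresholding drives the recovered matrix toward low rank.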
no code implementations • NeurIPS 2014 • Jean Lafond, Olga Klopp, Eric Moulines, Joseph Salmon
The task of reconstructing a matrix given a sample of observed entries is known as the matrix completion problem.
no code implementations • 26 Aug 2014 • Olga Klopp, Jean Lafond, Eric Moulines, Joseph Salmon
The task of estimating a matrix given a sample of observed entries is known as the \emph{matrix completion problem}.