no code implementations • ICML 2018 • Thomas Laurent, James Von Brecht
By appealing to harmonic analysis, we show that all local minima of such networks are non-differentiable, except for those minima that occur in a region of parameter space where the loss surface is perfectly flat.
no code implementations • 5 Dec 2017 • Thomas Laurent, James Von Brecht
We consider deep linear networks with arbitrary convex differentiable loss.
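A deep linear network composes weight matrices with no nonlinearities in between, so the end-to-end map is itself linear even though the parameterization is a product of factors. A minimal sketch under assumed shapes and data (this is illustrative only, not the paper's code; the squared error stands in for an arbitrary convex differentiable loss):

```python
import numpy as np

rng = np.random.default_rng(0)
depth, d, n = 3, 4, 50                  # hypothetical depth, width, sample count
Ws = [0.1 * rng.standard_normal((d, d)) for _ in range(depth)]
X = rng.standard_normal((d, n))          # inputs
Y = rng.standard_normal((d, n))          # targets

def forward(Ws, X):
    # Apply W_depth @ ... @ W_1 @ X; no activation functions, so the
    # network computes a single linear map despite having many layers.
    out = X
    for W in Ws:
        out = W @ out
    return out

def loss(Ws, X, Y):
    # Squared error: convex and differentiable in the end-to-end map,
    # though non-convex in the individual factors Ws.
    R = forward(Ws, X) - Y
    return 0.5 * np.sum(R * R)
```

Note that the loss is convex as a function of the product of the weight matrices but generally non-convex in the factors themselves, which is what makes the landscape analysis non-trivial.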
no code implementations • 19 Dec 2016 • Thomas Laurent, James Von Brecht
We introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task.
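For intuition, a single-gate recurrent update is about as simple as a gated RNN can get: one gate interpolates between the previous hidden state and a candidate state. The sketch below is a generic minimal gated cell with hypothetical weight names, not necessarily the exact architecture proposed in the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minimal_gated_step(h, x, Wg, Ug, bg, Wh, Uh, bh):
    # One gate g controls how much of the old state h is kept versus
    # how much of the candidate state h_tilde is written in.
    g = sigmoid(Wg @ x + Ug @ h + bg)        # gate in (0, 1)
    h_tilde = np.tanh(Wh @ x + Uh @ h + bh)  # candidate state in (-1, 1)
    return g * h + (1.0 - g) * h_tilde       # gated interpolation
```

Compared with an LSTM (two states, three gates) or a GRU (one state, two gates), a cell of this form has a single state and a single gate, which keeps the parameter count and the update rule small.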
1 code implementation • NeurIPS 2016 • Thomas Laurent, James Von Brecht, Xavier Bresson, Arthur Szlam
We introduce a theoretical and algorithmic framework for multi-way graph partitioning that relies on a multiplicative cut-based objective.
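To make "multiplicative cut-based objective" concrete, one simple instance multiplies a per-cluster cut ratio across all clusters, rather than summing ratios as normalized cut does. The sketch below is illustrative only and is not claimed to be the paper's exact objective; `A` is a dense symmetric adjacency matrix and `labels` assigns each node to a cluster:

```python
import numpy as np

def multiplicative_cut(A, labels):
    # Product over clusters of (edges leaving the cluster) / (cluster volume).
    # Illustrative objective: a single well-separated cluster with cut 0
    # drives the whole product to 0.
    score = 1.0
    for c in np.unique(labels):
        mask = labels == c
        cut = A[mask][:, ~mask].sum()   # total edge weight leaving cluster c
        vol = A[mask].sum()             # total degree of cluster c
        score *= cut / vol
    return score
```

A multiplicative objective couples the clusters differently from an additive one: improving the worst cluster has an outsized effect on the product, which is one reason to study cut objectives of this form.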
no code implementations • 19 Jun 2015 • Xavier Bresson, Thomas Laurent, James Von Brecht
This work aims to recover signals that are sparse on graphs.
no code implementations • 24 Nov 2014 • Nicolas Garcia Trillos, Dejan Slepcev, James Von Brecht, Thomas Laurent, Xavier Bresson
We consider point clouds obtained as samples of a ground-truth measure.
no code implementations • 15 Jun 2014 • Xavier Bresson, Huiyi Hu, Thomas Laurent, Arthur Szlam, James Von Brecht
In this work, we propose a simple and easily parallelizable algorithm for multiway graph partitioning.