Search Results for author: Edouard Oyallon

Found 33 papers, 19 papers with code

Generic Deep Networks with Wavelet Scattering

1 code implementation 20 Dec 2013 Edouard Oyallon, Stéphane Mallat, Laurent Sifre

We introduce a two-layer wavelet scattering network for object classification.

General Classification
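The scattering construction above can be sketched in one dimension: cascade band-pass filtering, complex modulus, and global averaging. Below is a minimal NumPy illustration; the Gaussian frequency bumps and their parameters are illustrative stand-ins, not the paper's 2D Morlet design.

```python
import numpy as np

def bump(N, xi, sigma):
    # Frequency-domain Gaussian bump centered at xi: a crude Morlet-like band-pass.
    omega = np.fft.fftfreq(N)
    return np.exp(-((omega - xi) ** 2) / (2 * sigma ** 2))

def scattering_1d(x, xis=(0.25, 0.125, 0.0625), sigma=0.02):
    # Two-layer scattering: wavelet-modulus cascade followed by global averaging.
    N = len(x)
    phi = bump(N, 0.0, sigma)                            # low-pass filter
    conv = lambda sig, filt: np.fft.ifft(np.fft.fft(sig) * filt)
    coeffs = [np.real(conv(x, phi)).mean()]              # order 0: averaged input
    for xi1 in xis:
        u1 = np.abs(conv(x, bump(N, xi1, sigma)))        # |x * psi_1|
        coeffs.append(np.real(conv(u1, phi)).mean())     # order 1
        for xi2 in xis:
            if xi2 >= xi1:                               # keep decreasing-frequency paths
                continue
            u2 = np.abs(conv(u1, bump(N, xi2, sigma)))   # ||x * psi_1| * psi_2|
            coeffs.append(np.real(conv(u2, phi)).mean()) # order 2
    return np.array(coeffs)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
S = scattering_1d(x)
# The global average makes the coefficients invariant to circular shifts of the input.
assert np.allclose(S, scattering_1d(np.roll(x, 17)))
```

The shift-invariance check is the point of the construction: the modulus discards phase and the final averaging discards position, while the two filter layers retain discriminative frequency-interaction information.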

Deep Roto-Translation Scattering for Object Classification

1 code implementation CVPR 2015 Edouard Oyallon, Stéphane Mallat

Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT.

Classification Dictionary Learning +4

Building a Regular Decision Boundary with Deep Networks

1 code implementation CVPR 2017 Edouard Oyallon

We show that increasing the width of our network permits being competitive with very deep networks.

Multiscale Hierarchical Convolutional Networks

no code implementations 12 Mar 2017 Jörn-Henrik Jacobsen, Edouard Oyallon, Stéphane Mallat, Arnold W. M. Smeulders

Multiscale hierarchical convolutional networks are structured deep convolutional networks where layers are indexed by progressively higher dimensional attributes, which are learned from training data.

Attribute

Scaling the Scattering Transform: Deep Hybrid Networks

2 code implementations ICCV 2017 Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko

Combining scattering networks with a modern ResNet, we achieve a single-crop top-5 error of 11.4% on ImageNet ILSVRC2012, comparable to the ResNet-18 architecture, while utilizing only 10 layers.

Image Classification

i-RevNet: Deep Invertible Networks

2 code implementations ICLR 2018 Jörn-Henrik Jacobsen, Arnold Smeulders, Edouard Oyallon

An analysis of i-RevNet's learned representations suggests an alternative explanation for the success of deep networks: progressive contraction and linear separation with depth.
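The exact invertibility behind i-RevNet can be illustrated with an additive coupling block: the forward map has a closed-form inverse no matter what the inner residual function computes, so no activations need to be stored for reconstruction. A minimal sketch, in which the ReLU inner function is a placeholder rather than the paper's architecture:

```python
import numpy as np

def residual(h, W):
    # Arbitrary (even non-invertible) inner function; here a small ReLU layer.
    return np.maximum(W @ h, 0.0)

def couple(x1, x2, W):
    # Additive coupling, the building block of RevNet/i-RevNet-style architectures:
    # invertible regardless of what `residual` computes.
    return x2, x1 + residual(x2, W)

def uncouple(y1, y2, W):
    # Closed-form inverse: recover x1 by subtracting the same residual, then swap back.
    return y2 - residual(y1, W), y1

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
y1, y2 = couple(x1, x2, W)
z1, z2 = uncouple(y1, y2, W)
assert np.allclose(z1, x1) and np.allclose(z2, x2)   # exact inversion
```

Because `uncouple(couple(x1, x2)) == (x1, x2)` holds exactly, stacking such blocks yields a network that discards no information about its input, which is what makes the progressive-contraction analysis possible.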

Online Regularized Nonlinear Acceleration

no code implementations 24 May 2018 Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach

Regularized nonlinear acceleration (RNA) estimates the minimum of a function by post-processing iterates from an algorithm such as the gradient method.

General Classification

Nonlinear Acceleration of CNNs

1 code implementation 1 Jun 2018 Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach

The Regularized Nonlinear Acceleration (RNA) algorithm is an acceleration method capable of improving the rate of convergence of many optimization schemes such as gradient descent, SAGA or SVRG.
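The RNA idea, combining past iterates with weights that minimize a regularized residual norm, can be sketched as follows. This is a simplified version applied to gradient descent on a toy quadratic; the regularization constant and problem sizes are illustrative choices, not the paper's.

```python
import numpy as np

def rna(xs, lam=1e-8):
    # Regularized Nonlinear Acceleration: post-process iterates x_0..x_k with an
    # affine combination whose residual norm is (regularized-)minimal.
    X = np.array(xs)
    R = X[1:] - X[:-1]                         # residuals r_i = x_{i+1} - x_i
    G = R @ R.T
    G = G / np.linalg.norm(G)                  # normalize, then regularize
    z = np.linalg.solve(G + lam * np.eye(len(G)), np.ones(len(G)))
    c = z / z.sum()                            # weights summing to one
    return c @ X[:-1]                          # extrapolated point

# Gradient descent on an ill-conditioned quadratic f(x) = 0.5 x^T A x (optimum: 0).
rng = np.random.default_rng(0)
A = np.diag([1.0, 0.5, 0.1])
x = rng.standard_normal(3)
xs = []
for _ in range(12):
    xs.append(x.copy())
    x = x - 1.0 * (A @ x)                      # step size 1/L with L = 1
y = rna(xs)
# The extrapolated point is much closer to the optimum than the last iterate.
assert np.linalg.norm(y) < 0.1 * np.linalg.norm(xs[-1])
```

The appeal of the method is that it only reads the iterates: the underlying optimizer runs unchanged, and the extrapolation can be applied online or offline.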

Scattering Networks for Hybrid Representation Learning

1 code implementation 17 Sep 2018 Edouard Oyallon, Sergey Zagoruyko, Gabriel Huang, Nikos Komodakis, Simon Lacoste-Julien, Matthew Blaschko, Eugene Belilovsky

In particular, by working in scattering space, we achieve competitive results both for supervised and unsupervised learning tasks, while making progress towards constructing more interpretable CNNs.

Representation Learning

Compressing the Input for CNNs with the First-Order Scattering Transform

1 code implementation ECCV 2018 Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, Michal Valko

We study the first-order scattering transform as a candidate for reducing the signal processed by a convolutional neural network (CNN).

General Classification Translation

Shallow Learning For Deep Networks

no code implementations 27 Sep 2018 Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks.

On Lazy Training in Differentiable Programming

1 code implementation NeurIPS 2019 Lenaic Chizat, Edouard Oyallon, Francis Bach

In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying.

Greedy Layerwise Learning Can Scale to ImageNet

1 code implementation 29 Dec 2018 Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks.

Image Classification
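The greedy layer-wise procedure can be sketched in NumPy: train one ReLU layer together with a disposable linear probe on a 1-hidden-layer problem, freeze the layer, and repeat on its output. The toy two-Gaussian data and all hyperparameters below are illustrative, not the paper's ImageNet setup.

```python
import numpy as np

def train_layer(X, y, width, steps=500, lr=0.5, seed=0):
    # Jointly train one ReLU layer and a disposable linear probe on the logistic loss.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, width)) / np.sqrt(d)
    v = np.zeros(width)
    for _ in range(steps):
        pre = X @ W
        h = np.maximum(pre, 0.0)
        p = 1.0 / (1.0 + np.exp(-(h @ v)))       # probe probabilities
        g = (p - y) / n                          # d(loss)/d(logits)
        gv = h.T @ g
        gW = X.T @ (np.outer(g, v) * (pre > 0))  # backprop through this layer only
        v -= lr * gv
        W -= lr * gW
    return W, v

rng = np.random.default_rng(1)
n, d = 400, 5
mu = np.ones(d)
X = np.vstack([rng.normal(+mu, 1.0, size=(n // 2, d)),
               rng.normal(-mu, 1.0, size=(n // 2, d))])
y = np.repeat([1.0, 0.0], n // 2)

H = X
for k in range(2):                               # build the network layer by layer
    W, v = train_layer(H, y, width=8, seed=k)
    H = np.maximum(H @ W, 0.0)                   # freeze the layer, keep its features

acc = np.mean(((H @ v) > 0) == (y > 0.5))        # accuracy of the last auxiliary probe
assert acc > 0.85
```

Each training problem touches a single hidden layer, so no gradient ever flows through more than one layer at a time; deeper layers inherit whatever structure the frozen shallow features already expose.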

Decoupled Greedy Learning of CNNs

2 code implementations ICML 2020 Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

It is based on a greedy relaxation of the joint training objective, recently shown to be effective in the context of Convolutional Neural Networks (CNNs) on large-scale image classification.

Image Classification

Interferometric Graph Transform: a Deep Unsupervised Graph Representation

1 code implementation ICML 2020 Edouard Oyallon

We propose the Interferometric Graph Transform (IGT), which is a new class of deep unsupervised graph convolutional neural network for building graph representations.

Action Recognition Community Detection +1

A spectral perspective on GCNs

no code implementations 1 Jan 2021 Nathan Grinsztajn, Philippe Preux, Edouard Oyallon

In this work, we study the behavior of standard GCNs under spectral manipulations.

The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods

no code implementations ICLR 2021 Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon

A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis.

Object Recognition Representation Learning

The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods

1 code implementation 19 Jan 2021 Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon

A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis.

Object Recognition Representation Learning

Interferometric Graph Transform for Community Labeling

no code implementations 4 Jun 2021 Nathan Grinsztajn, Louis Leconte, Philippe Preux, Edouard Oyallon

We present a new approach for learning unsupervised node representations in community graphs.

Low-Rank Projections of GCNs Laplacian

no code implementations ICLR Workshop GTRL 2021 Nathan Grinsztajn, Philippe Preux, Edouard Oyallon

In this work, we study the behavior of standard models for community detection under spectral manipulations.

Community Detection
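The spectral objects involved, the symmetric-normalized graph Laplacian and low-rank projections onto its low-frequency eigenvectors, can be sketched as follows. This is the generic construction, not the paper's exact method; the 5-node graph is an arbitrary example.

```python
import numpy as np

# Adjacency matrix of a small undirected graph.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)                               # node degrees
L = np.eye(5) - A / np.sqrt(np.outer(d, d))     # L = I - D^{-1/2} A D^{-1/2}
evals, evecs = np.linalg.eigh(L)                # eigenvalues sorted ascending

# D^{1/2} * ones is the eigenvector for eigenvalue 0 (the "constant" mode).
assert np.allclose(L @ np.sqrt(d), 0.0)

def low_pass(x, k):
    # Low-rank projection of a node signal onto the k lowest-frequency eigenvectors.
    U = evecs[:, :k]
    return U @ (U.T @ x)

x = np.array([1.0, -1.0, 2.0, 0.0, 3.0])
assert np.allclose(low_pass(x, 5), x)           # full-rank projection is the identity
```

Truncating to small `k` keeps only the smooth, community-scale variation of the signal, which is the regime where spectral manipulations of GCN inputs are studied.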

Decoupled Greedy Learning of CNNs for Synchronous and Asynchronous Distributed Learning

no code implementations 11 Jun 2021 Eugene Belilovsky, Louis Leconte, Lucas Caccia, Michael Eickenberg, Edouard Oyallon

With the use of a replay buffer we show that this approach can be extended to asynchronous settings, where modules can operate and continue to update with possibly large communication delays.

Image Classification Quantization

Gradient Masked Averaging for Federated Learning

no code implementations 28 Jan 2022 Irene Tenison, Sai Aravind Sreeramadas, Vaikkunth Mugunthan, Edouard Oyallon, Irina Rish, Eugene Belilovsky

A major challenge in federated learning is the heterogeneity of data across clients, which can degrade the performance of standard FL algorithms.

Federated Learning Out-of-Distribution Generalization

On Non-Linear operators for Geometric Deep Learning

no code implementations 6 Jul 2022 Grégoire Sergeant-Perthuis, Jakob Maier, Joan Bruna, Edouard Oyallon

In the context of Neural Networks defined over $\mathcal{M}$, it indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries.

Why do tree-based models still outperform deep learning on tabular data?

1 code implementation 18 Jul 2022 Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux

While deep learning has enabled tremendous progress on text and image datasets, its superiority on tabular data is not clear.

Benchmarking

DADAO: Decoupled Accelerated Decentralized Asynchronous Optimization

1 code implementation 26 Jul 2022 Adel Nabli, Edouard Oyallon

This work introduces DADAO: the first decentralized, accelerated, asynchronous, primal, first-order algorithm to minimize a sum of $L$-smooth and $\mu$-strongly convex functions distributed over a given network of size $n$.

Point Processes
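To fix ideas about the decentralized setting, a plain synchronous decentralized gradient step (gossip averaging plus a local gradient update) can be sketched; note this baseline is not DADAO itself, which is asynchronous, accelerated, and decouples gossip from gradient computations. The ring topology and step size below are illustrative.

```python
import numpy as np

n = 5
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # node i holds f_i(x) = 0.5 (x - b_i)^2
# Doubly stochastic gossip matrix for a ring of 5 nodes.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                                # one scalar parameter per node
eta = 0.01
for _ in range(2000):
    x = W @ x - eta * (x - b)                  # gossip, then local gradient step

# All nodes end up near the global minimizer, the mean of the b_i.
assert np.all(np.abs(x - b.mean()) < 0.2)
```

In this synchronous baseline every node must gossip and compute a gradient at every round; the point of decoupled asynchronous methods is to remove exactly that lockstep requirement.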

Why do tree-based models still outperform deep learning on typical tabular data?

1 code implementation NeurIPS 2022 Leo Grinsztajn, Edouard Oyallon, Gael Varoquaux

While deep learning has enabled tremendous progress on text and image datasets, its superiority on tabular data is not clear.

Benchmarking

Can Forward Gradient Match Backpropagation?

1 code implementation 12 Jun 2023 Louis Fournier, Stéphane Rivaud, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Forward gradients - the idea of using directional derivatives computed in forward differentiation mode - have recently been shown to be usable for neural network training while avoiding problems generally associated with backpropagation, such as locking and memorization requirements.

Memorization
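The forward-gradient estimator projects the gradient onto a random direction with a single forward-mode directional derivative, then scales that direction: g = (∇f(x)·v) v with v ~ N(0, I) is an unbiased estimate of ∇f(x). A minimal NumPy sketch, where central finite differences stand in for true forward-mode differentiation:

```python
import numpy as np

def jvp(f, x, v, eps=1e-6):
    # Directional derivative by central finite differences
    # (a stand-in for a genuine forward-mode JVP).
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

f = lambda x: 0.5 * np.sum(x ** 2)             # so that the true gradient is x itself
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5])

est = np.zeros_like(x)
n = 20000
for _ in range(n):
    v = rng.standard_normal(x.shape)
    est += jvp(f, x, v) * v                    # one forward-gradient sample
est /= n

assert np.allclose(est, x, atol=0.1)           # unbiased: matches ∇f(x) on average
```

Each sample costs one forward pass and no stored activations, which is the appeal; the catch, and the subject of the paper, is that the estimator's variance grows with dimension, so how the directions v are chosen matters.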

Vectorizing string entries for data processing on tables: when are larger language models better?

no code implementations 15 Dec 2023 Léo Grinsztajn, Edouard Oyallon, Myung Jun Kim, Gaël Varoquaux

We study the benefits of language models in 14 analytical tasks on tables while varying the training size, as well as for a fuzzy join benchmark.

Cyclic Data Parallelism for Efficient Parallelism of Deep Neural Networks

no code implementations 13 Mar 2024 Louis Fournier, Edouard Oyallon

Training large deep learning models requires parallelization techniques to scale.
