Search Results for author: Michael Eickenberg

Found 35 papers, 17 papers with code

PETRA: Parallel End-to-end Training with Reversible Architectures

no code implementations · 4 Jun 2024 · Stéphane Rivaud, Louis Fournier, Thomas Pumir, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Reversible architectures have been shown to perform on par with their non-reversible counterparts, and have been applied in deep learning for memory savings and generative modeling.

From Feature Visualization to Visual Circuits: Effect of Adversarial Model Manipulation

no code implementations · 3 Jun 2024 · Geraldin Nanfack, Michael Eickenberg, Eugene Belilovsky

Understanding the inner workings of large-scale deep neural networks is challenging yet crucial in several high-stakes applications.

Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation

no code implementations · 7 Feb 2024 · Pedro Vianna, Muawiz Chaudhary, Paria Mehrbod, An Tang, Guy Cloutier, Guy Wolf, Michael Eickenberg, Eugene Belilovsky

However, in many practical applications this technique is vulnerable to label distribution shifts, sometimes failing catastrophically.

Test-time Adaptation

SimBIG: Field-level Simulation-Based Inference of Galaxy Clustering

no code implementations · 23 Oct 2023 · Pablo Lemos, Liam Parker, ChangHoon Hahn, Shirley Ho, Michael Eickenberg, Jiamin Hou, Elena Massara, Chirag Modi, Azadeh Moradinezhad Dizgah, Bruno Regaldo-Saint Blancard, David Spergel

We demonstrate the robustness of our analysis by showcasing our ability to infer unbiased cosmological constraints from a series of test simulations that are constructed using different forward models than the one used in our training dataset.

Clustering · Data Compression

Learnable wavelet neural networks for cosmological inference

1 code implementation · 24 Jul 2023 · Christian Pedersen, Michael Eickenberg, Shirley Ho

Convolutional neural networks (CNNs) have been shown to both extract more information than the traditional two-point statistics from cosmological fields, and marginalise over astrophysical effects extremely well.

Statistical Component Separation for Targeted Signal Recovery in Noisy Mixtures

1 code implementation · 26 Jun 2023 · Bruno Régaldo-Saint Blancard, Michael Eickenberg

In the case of 1), we show that our method better recovers the descriptors of the target data than a standard denoising method in most situations.

Image Denoising

Can Forward Gradient Match Backpropagation?

1 code implementation · 12 Jun 2023 · Louis Fournier, Stéphane Rivaud, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Forward Gradients, the idea of using directional derivatives in forward differentiation mode, have recently been shown to be usable for neural network training while avoiding problems generally associated with backpropagation gradient computation, such as locking and memory requirements.
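As a hedged illustration (a toy quadratic loss with an analytic gradient, not the paper's setup), the forward-gradient estimator can be sketched in a few lines of NumPy: sample a random direction v, compute the directional derivative (in practice a forward-mode Jacobian-vector product), and scale v by it. The estimator is unbiased for the true gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return 0.5 * np.sum(w ** 2)

def grad(w):
    # True gradient of the toy loss, used only for comparison.
    return w

def forward_gradient(w, rng):
    # Sample a direction v and form (grad . v) * v; the directional
    # derivative would come from forward-mode AD in a real setting.
    v = rng.standard_normal(w.shape)
    return (grad(w) @ v) * v

w = rng.standard_normal(5)
# Averaging many samples recovers the gradient: E[(g.v) v] = g for v ~ N(0, I).
est = np.mean([forward_gradient(w, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(est - grad(w))))
```

A single sample is cheap but high-variance, which is the tension the paper examines.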


MoMo: Momentum Models for Adaptive Learning Rates

1 code implementation · 12 May 2023 · Fabian Schaipp, Ruben Ohana, Michael Eickenberg, Aaron Defazio, Robert M. Gower

MoMo uses momentum estimates of the losses and gradients sampled at each iteration to build a model of the loss function.

Recommendation Systems · Stochastic Optimization
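The idea in the abstract above can be loosely sketched as follows. This is a simplified caricature, not the authors' MoMo algorithm: the EMA construction and the assumed lower bound `f_star` on the loss are illustrative choices. Exponential moving averages of sampled losses, gradients, and gradient-iterate inner products build a crude model of the loss, from which a Polyak-style step size is derived.

```python
import numpy as np

def momo_like_step(x, f, g, state, beta=0.9, lr_max=1.0, f_star=0.0):
    # EMAs of losses, gradients, and <g, x> form a rough model of the loss.
    state["f"] = beta * state["f"] + (1 - beta) * f
    state["g"] = beta * state["g"] + (1 - beta) * g
    state["gx"] = beta * state["gx"] + (1 - beta) * (g @ x)
    d = state["g"]
    gap = max(state["f"] + d @ x - state["gx"] - f_star, 0.0)
    lr = min(lr_max, gap / (d @ d + 1e-12))   # Polyak-style adaptive step
    return x - lr * d

loss = lambda w: 0.5 * w @ w
grad = lambda w: w
x = np.array([3.0, -2.0])
state = {"f": loss(x), "g": grad(x), "gx": grad(x) @ x}
x0_loss = loss(x)
for _ in range(200):
    x = momo_like_step(x, loss(x), grad(x), state)
print(loss(x) < x0_loss)
```

The cap `lr_max` plays the role of the user-supplied learning rate that the model-based step refines.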

Local Learning with Neuron Groups

1 code implementation · 18 Jan 2023 · Adeetya Patel, Michael Eickenberg, Eugene Belilovsky

Local learning is an approach to model-parallelism that removes the standard end-to-end learning setup and utilizes local objective functions to permit parallel learning amongst model components in a deep network.
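To make the local-objective idea concrete, here is a minimal sketch (a hypothetical toy regression setup, not the paper's architecture): each block consists of random ReLU features plus a purely local least-squares head, so no gradient ever crosses block boundaries and blocks could in principle train in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

def train_local_block(H, y, width, rng):
    # One block: random ReLU features, then a *local* linear head fit by
    # least squares. No gradient flows back into earlier blocks.
    W = rng.standard_normal((H.shape[1], width))
    b = rng.standard_normal(width)
    Z = np.maximum(H @ W + b, 0.0)
    Zb = np.column_stack([Z, np.ones(len(Z))])       # intercept for the head
    head, *_ = np.linalg.lstsq(Zb, y, rcond=None)
    return Z, Zb @ head   # frozen features for the next block, local prediction

H, pred1 = train_local_block(X, y, width=64, rng=rng)
H, pred2 = train_local_block(H, y, width=64, rng=rng)
baseline = np.mean((y - y.mean()) ** 2)
mse1 = np.mean((pred1 - y) ** 2)
mse2 = np.mean((pred2 - y) ** 2)
print(mse1 < baseline, mse2 < baseline)
```

Each block's head sees only that block's output, which is the defining property of local learning.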

Robust Simulation-Based Inference in Cosmology with Bayesian Neural Networks

1 code implementation · 18 Jul 2022 · Pablo Lemos, Miles Cranmer, Muntazir Abidi, ChangHoon Hahn, Michael Eickenberg, Elena Massara, David Yallup, Shirley Ho

Simulation-based inference (SBI) is rapidly establishing itself as a standard machine learning technique for analyzing data in cosmological surveys.

Density Estimation

Exploring the Optimality of Tight-Frame Scattering Networks

no code implementations · 29 Sep 2021 · Shanel Gauthier, Benjamin Thérien, Laurent Alsène-Racicot, Muawiz Sajjad Chaudhary, Irina Rish, Eugene Belilovsky, Michael Eickenberg, Guy Wolf

The wavelet filters used in the scattering transform are typically selected to create a tight frame via a parameterized mother wavelet.
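A hedged sketch of what a tight frame from a parameterized mother wavelet means in practice (toy 1D Gaussian-bump "Morlet" filters, not the paper's 2D parameterization): dilate the mother wavelet over dyadic scales, add a low-pass, and inspect the Littlewood-Paley sum, which would be approximately constant for a tight frame.

```python
import numpy as np

N = 1024
omega = np.fft.fftfreq(N) * 2 * np.pi    # frequency grid in [-pi, pi)

def morlet_hat(omega, xi, sigma):
    # Fourier transform of a simplified Morlet wavelet:
    # a Gaussian bump centred at frequency xi.
    return np.exp(-0.5 * ((omega - xi) / sigma) ** 2)

# Dyadic filter bank: the mother wavelet dilated across scales
scales = [2.0 ** j for j in range(5)]
filters = [morlet_hat(omega, xi=np.pi / s, sigma=0.8 * np.pi / s) for s in scales]
phi_hat = np.exp(-0.5 * (omega / (0.8 * np.pi / scales[-1])) ** 2)   # low-pass

# Littlewood-Paley sum: for a tight frame this would be ~constant on the band
lp = phi_hat ** 2 + sum(np.abs(f) ** 2 for f in filters)
band = (omega > 0.05) & (omega < np.pi * 0.9)
print(lp[band].min(), lp.max())
```

Optimizing the mother-wavelet parameters changes how flat this sum is, which is the design question the paper explores.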

Decoupled Greedy Learning of CNNs for Synchronous and Asynchronous Distributed Learning

no code implementations · 11 Jun 2021 · Eugene Belilovsky, Louis Leconte, Lucas Caccia, Michael Eickenberg, Edouard Oyallon

Using a replay buffer, we show that this approach can be extended to asynchronous settings, where modules can operate and continue to update despite possibly large communication delays.

Image Classification · Quantization

Practical Phase Retrieval: Low-Photon Holography with Untrained Priors

no code implementations · 1 Jan 2021 · Hannah Lawrence, David Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié

To the best of our knowledge, this is the first work to consider a dataset-free machine learning approach for holographic phase retrieval.


Phase Retrieval with Holography and Untrained Priors: Tackling the Challenges of Low-Photon Nanoscale Imaging

1 code implementation · 14 Dec 2020 · Hannah Lawrence, David A. Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié

Phase retrieval is the inverse problem of recovering a signal from magnitude-only Fourier measurements, and underlies numerous imaging modalities, such as Coherent Diffraction Imaging (CDI).
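The inverse problem can be sketched with a generic 1D error-reduction loop (a classical Gerchberg-Saxton-style baseline with a known support, not the paper's untrained-prior method): alternate between imposing the measured Fourier magnitudes and the object-domain constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
n, support = 64, 16
x_true = np.zeros(n)
x_true[:support] = rng.uniform(0.5, 1.0, support)
mag = np.abs(np.fft.fft(x_true))                 # magnitude-only measurements

x = rng.uniform(size=n) * (np.arange(n) < support)   # random start on the support
err0 = np.linalg.norm(np.abs(np.fft.fft(x)) - mag)
for _ in range(500):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))           # impose the measured magnitudes
    x = np.real(np.fft.ifft(X))
    x = np.where((np.arange(n) < support) & (x > 0), x, 0.0)  # support + positivity
err = np.linalg.norm(np.abs(np.fft.fft(x)) - mag)
print(err < err0)
```

Holography adds a known reference next to the specimen, which makes the phase problem far better conditioned than this unreferenced toy.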


Decoupled Greedy Learning of CNNs

2 code implementations · ICML 2020 · Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

It is based on a greedy relaxation of the joint training objective, recently shown to be effective in the context of Convolutional Neural Networks (CNNs) on large-scale image classification.

Image Classification

Greedy Layerwise Learning Can Scale to ImageNet

1 code implementation · 29 Dec 2018 · Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks.

Image Classification

Shallow Learning For Deep Networks

no code implementations · 27 Sep 2018 · Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks.

Solid Harmonic Wavelet Scattering for Predictions of Molecule Properties

no code implementations · 1 May 2018 · Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, Stéphane Mallat, Louis Thiry

We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory.

BIG-bench Machine Learning

Solid Harmonic Wavelet Scattering: Predicting Quantum Molecular Energy from Invariant Descriptors of 3D Electronic Densities

no code implementations · NeurIPS 2017 · Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, Stephane Mallat

We introduce a solid harmonic wavelet scattering representation, invariant to rigid motion and stable to deformations, for regression and classification of 2D and 3D signals.

General Classification · Regression

FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging

no code implementations · 22 Dec 2015 · Gaël Varoquaux, Michael Eickenberg, Elvis Dohmatob, Bertrand Thirion

The total variation (TV) penalty, like many other analysis-sparsity problems, does not lead to separable factors or a proximal operator with a closed-form expression, such as soft thresholding for the $\ell_1$ penalty.

Brain Decoding
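The contrast drawn in the abstract above (the ℓ1 prox has a closed form, the TV prox does not) can be made concrete with a short sketch of the standard soft-thresholding formula, here on a toy vector:

```python
import numpy as np

def prox_l1(x, lam):
    # Proximal operator of lam * ||.||_1, i.e. the minimizer of
    # 0.5 * ||z - x||^2 + lam * ||z||_1: elementwise soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.5, 1.2, -2.0])
print(prox_l1(x, 1.0))
```

No such closed form exists for the TV penalty: its prox must itself be computed with an inner iterative solver, which is the computational difficulty FAASTA addresses.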

Data-driven HRF estimation for encoding and decoding models

no code implementations · 27 Feb 2014 · Fabian Pedregosa, Michael Eickenberg, Philippe Ciuciu, Bertrand Thirion, Alexandre Gramfort

We develop a method for the joint estimation of activation and HRF using a rank constraint causing the estimated HRF to be equal across events/conditions, yet permitting it to be different across voxels.

Computational Efficiency
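The rank constraint described above can be sketched with a toy alternating-least-squares loop (illustrative shapes and noise level, not the paper's estimator): the HRF h is shared across voxels while amplitudes beta differ per voxel, so the coefficient matrix is modeled as the rank-1 outer product of h and beta.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.exp(-0.5 * (np.arange(20) - 5.0) ** 2 / 4.0)   # toy HRF shape
beta_true = rng.standard_normal(50)                         # per-voxel amplitudes
B = np.outer(h_true, beta_true) + 0.01 * rng.standard_normal((20, 50))

# Alternating least squares under the rank-1 constraint B ~ h beta^T
h = rng.standard_normal(20)
for _ in range(50):
    beta = B.T @ h / (h @ h)       # LS update of amplitudes given the HRF
    h = B @ beta / (beta @ beta)   # LS update of the shared HRF given amplitudes

rel_err = (np.linalg.norm(np.outer(h, beta) - np.outer(h_true, beta_true))
           / np.linalg.norm(np.outer(h_true, beta_true)))
print(rel_err)
```

The scale (and sign) split between h and beta is arbitrary, which is why only their outer product is compared to the ground truth.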

Second order scattering descriptors predict fMRI activity due to visual textures

no code implementations · 10 Aug 2013 · Michael Eickenberg, Fabian Pedregosa, Senoussi Mehdi, Alexandre Gramfort, Bertrand Thirion

Second-layer scattering descriptors are known to provide good classification performance on natural quasi-stationary processes such as visual textures, due to their sensitivity to higher-order moments and their continuity with respect to small deformations.

General Classification
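A hedged 1D sketch of the scattering cascade (toy Gabor filters, not the paper's 2D pipeline): a wavelet modulus followed by low-pass averaging gives first-order coefficients, and a second wavelet modulus applied to the first-order envelope captures the higher-order structure the abstract refers to.

```python
import numpy as np

def gabor(n, xi, sigma):
    # Gaussian-windowed complex exponential, L1-normalized.
    t = np.arange(n) - n // 2
    g = np.exp(-0.5 * (t / sigma) ** 2) * np.exp(1j * xi * t)
    return g / np.abs(g).sum()

def scattering_1d(x, psi1, psi2, phi):
    # First order: wavelet modulus, then local averaging.
    u1 = np.abs(np.convolve(x, psi1, mode="same"))
    s1 = np.convolve(u1, phi, mode="same")
    # Second order: another wavelet modulus on the first-order envelope.
    u2 = np.abs(np.convolve(u1, psi2, mode="same"))
    s2 = np.convolve(u2, phi, mode="same")
    return s1, s2

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                  # texture-like stationary signal
psi1 = gabor(33, xi=1.0, sigma=4.0)
psi2 = gabor(33, xi=0.3, sigma=8.0)
phi = np.abs(gabor(65, xi=0.0, sigma=16.0))   # real low-pass window
s1, s2 = scattering_1d(x, psi1, psi2, phi)
print(s1.shape, bool((s2 >= 0).all()))
```

The averaging window controls the trade-off between local invariance and descriptor resolution.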

HRF estimation improves sensitivity of fMRI encoding and decoding models

no code implementations · 13 May 2013 · Fabian Pedregosa, Michael Eickenberg, Bertrand Thirion, Alexandre Gramfort

Extracting activation patterns from functional Magnetic Resonance Images (fMRI) datasets remains challenging in rapid-event designs due to the inherent delay of blood oxygen level-dependent (BOLD) signal.
