Search Results for author: Ruben Ohana

Found 15 papers, 11 papers with code

Listening to the Noise: Blind Denoising with Gibbs Diffusion

1 code implementation • 29 Feb 2024 • David Heurtel-Depeiges, Charles C. Margossian, Ruben Ohana, Bruno Régaldo-Saint Blancard

Assuming arbitrary parametric Gaussian noise, we develop a Gibbs algorithm that alternates between sampling steps from a conditional diffusion model, trained to map the signal prior to the family of noise distributions, and from a Monte Carlo sampler that infers the noise parameters.

Bayesian Inference · Denoising
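For intuition, here is a minimal, self-contained Gibbs loop in the spirit of the paper, with toy stand-ins: the conditional diffusion step is replaced by a Gaussian posterior shrinkage under a toy signal prior, and the noise-parameter step uses a conjugate inverse-gamma update rather than the paper's generic Monte Carlo sampler. All priors and constants are illustrative assumptions.

```python
import numpy as np

# Toy Gibbs-style blind denoising (stand-ins, not the paper's model).
rng = np.random.default_rng(0)
tau2 = 1.0                                      # toy signal-prior variance (assumption)
x_true = rng.normal(0.0, np.sqrt(tau2), size=512)
y = x_true + rng.normal(0.0, 0.5, size=512)     # noisy observation, true sigma = 0.5

sigma2 = 1.0                                    # initial guess of the noise variance
for it in range(200):
    # (1) sample signal | noise params: Gaussian posterior shrinkage, standing in
    #     for the paper's conditional diffusion model.
    post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma2)
    post_mean = post_var * y / sigma2
    x = post_mean + np.sqrt(post_var) * rng.normal(size=y.shape)

    # (2) sample noise params | residual: conjugate inverse-gamma update,
    #     standing in for the paper's Monte Carlo sampler (e.g. HMC).
    resid = y - x
    a = 1.0 + resid.size / 2.0                  # weak inverse-gamma prior (assumption)
    b = 1.0 + 0.5 * np.sum(resid ** 2)
    sigma2 = b / rng.gamma(a)                   # draw from InvGamma(a, b)

print(f"estimated noise std ~ {np.sqrt(sigma2):.3f} (true 0.5)")
```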

Removing Dust from CMB Observations with Diffusion Models

no code implementations • 25 Oct 2023 • David Heurtel-Depeiges, Blakesley Burkhart, Ruben Ohana, Bruno Régaldo-Saint Blancard

We investigate diffusion-based modeling of the dust foreground and its usefulness for component separation.

MoMo: Momentum Models for Adaptive Learning Rates

1 code implementation • 12 May 2023 • Fabian Schaipp, Ruben Ohana, Michael Eickenberg, Aaron Defazio, Robert M. Gower

MoMo uses momentum estimates of the batch losses and gradients sampled at each iteration to build a model of the loss function.

Recommendation Systems · Stochastic Optimization
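A hedged numpy sketch of a MoMo-style update on a toy least-squares problem, following the abstract's description: exponential moving averages of the batch losses, gradients, and gradient-iterate inner products define a model of the loss, which caps the learning rate. The hyperparameters and the zero lower bound are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

# Toy least-squares problem with minibatch sampling.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(256, 16)), rng.normal(size=256)
x = np.zeros(16)

lr, beta = 1.0, 0.9                              # illustrative choices
f_bar, d, gamma_bar = 0.0, np.zeros_like(x), 0.0

for k in range(300):
    idx = rng.integers(0, 256, size=32)          # minibatch
    r = A[idx] @ x - b[idx]
    f = 0.5 * np.mean(r ** 2)                    # batch loss
    g = A[idx].T @ r / len(idx)                  # batch gradient

    # Momentum estimates that define the model of the loss.
    f_bar = beta * f_bar + (1 - beta) * f
    d = beta * d + (1 - beta) * g
    gamma_bar = beta * gamma_bar + (1 - beta) * float(g @ x)

    # Adaptive step: cap lr by the step that drives the modeled loss to its
    # lower bound (0 here, an assumption valid for least squares).
    t = min(lr, max(f_bar + float(d @ x) - gamma_bar, 0.0) / (float(d @ d) + 1e-12))
    x -= t * d

print("final full-batch loss:", 0.5 * np.mean((A @ x - b) ** 2))
```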

Shedding a PAC-Bayesian Light on Adaptive Sliced-Wasserstein Distances

1 code implementation • 7 Jun 2022 • Ruben Ohana, Kimia Nadjahi, Alain Rakotomamonjy, Liva Ralaivola

The Sliced-Wasserstein distance (SW) is a computationally efficient and theoretically grounded alternative to the Wasserstein distance.

Generalization Bounds
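As background, a minimal Monte Carlo estimator of the vanilla (uniform-slice) Sliced-Wasserstein distance; the paper's adaptive, PAC-Bayesian weighting of slices is not reproduced here. The sketch assumes equal sample sizes, so each 1D Wasserstein distance reduces to differences of sorted projections.

```python
import numpy as np

def sliced_wasserstein2(X, Y, n_proj=128, rng=None):
    """Monte Carlo estimate of SW_2 between two equal-size empirical samples."""
    if rng is None:
        rng = np.random.default_rng()
    d = X.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform directions on the sphere
    px = np.sort(X @ theta.T, axis=0)                      # sorted 1D projections of X
    py = np.sort(Y @ theta.T, axis=0)                      # sorted 1D projections of Y
    return np.sqrt(np.mean((px - py) ** 2))                # average 1D W_2^2, then root

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 5))
Y = rng.normal(0.5, 1.0, size=(500, 5))
print(sliced_wasserstein2(X, Y, rng=rng))
```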

Complex-to-Real Sketches for Tensor Products with Applications to the Polynomial Kernel

1 code implementation • 4 Feb 2022 • Jonas Wacker, Ruben Ohana, Maurizio Filippone

Commonly used approaches avoid computing the high-dimensional tensor product explicitly, resulting in a suboptimal dependence of $\mathcal{O}(3^p)$ in the embedding dimension.
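A hedged sketch of the kind of random feature map the paper studies: a degree-p polynomial-kernel sketch built from products of complex Gaussian projections, with the real part of the resulting inner product serving as the kernel estimate. With i.i.d. projections w satisfying E[(wᵀx)(wᵀy)*] = xᵀy, a product of p independent projections estimates (xᵀy)^p in expectation; the paper's exact complex-to-real construction and its variance guarantees differ in the details, and the normalizations here are illustrative.

```python
import numpy as np

def ctr_polynomial_features(X, p=3, D=2048, rng=None):
    """Complex random sketch whose real inner products approximate (x^T y)^p."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = X.shape
    Z = np.ones((n, D), dtype=np.complex128)
    for _ in range(p):
        # Complex Gaussian projections normalized so E[w w^H] = I.
        W = (rng.normal(size=(d, D)) + 1j * rng.normal(size=(d, D))) / np.sqrt(2)
        Z *= X @ W                               # elementwise product over the p factors
    return Z / np.sqrt(D)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10)) / np.sqrt(10)
Z = ctr_polynomial_features(X, p=3, rng=rng)
K_approx = (Z @ Z.conj().T).real                 # real part of the complex sketch
K_exact = (X @ X.T) ** 3                         # degree-3 polynomial kernel
print("max abs error:", np.abs(K_approx - K_exact).max())
```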

Photonic Differential Privacy with Direct Feedback Alignment

no code implementations • NeurIPS 2021 • Ruben Ohana, Hamlet J. Medina Ruiz, Julien Launay, Alessandro Cappelli, Iacopo Poli, Liva Ralaivola, Alain Rakotomamonjy

Optical Processing Units (OPUs) -- low-power photonic chips dedicated to large scale random projections -- have been used in previous work to train deep neural networks using Direct Feedback Alignment (DFA), an effective alternative to backpropagation.
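For reference, a minimal numpy implementation of Direct Feedback Alignment on a toy task: the output error reaches the hidden layer through a fixed random matrix B1 instead of the transposed forward weights; on an OPU that random projection would be performed optically. Architecture, data, and learning rate are illustrative choices, and the paper's differential-privacy analysis is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)   # toy binary target

W1 = rng.normal(size=(20, 64)) * 0.1             # trained forward weights
W2 = rng.normal(size=(64, 1)) * 0.1
B1 = rng.normal(size=(1, 64))                    # fixed random feedback matrix

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1)                          # hidden layer
    out = sigmoid(h @ W2)                        # output layer
    e = out - y                                  # output error (sigmoid + BCE)

    # DFA: project the output error through B1 instead of backpropagating via W2.T.
    delta1 = (e @ B1) * (1 - h ** 2)             # tanh'(pre-activation) = 1 - tanh^2
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta1 / len(X)

print("train accuracy:", np.mean((out > 0.5) == (y > 0.5)))
```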

Align, then memorise: the dynamics of learning with feedback alignment

1 code implementation • 24 Nov 2020 • Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt

Direct Feedback Alignment (DFA) is emerging as an efficient and biologically plausible alternative to the ubiquitous backpropagation algorithm for training deep neural networks.

Reservoir Computing meets Recurrent Kernels and Structured Transforms

1 code implementation • NeurIPS 2020 • Jonathan Dong, Ruben Ohana, Mushegh Rafayelyan, Florent Krzakala

Reservoir Computing is a class of simple yet efficient Recurrent Neural Networks where internal weights are fixed at random and only a linear output layer is trained.

Time Series · Time Series Prediction
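A minimal echo-state-network sketch of this setup: recurrent and input weights are drawn once at random (with spectral radius below 1 for the echo-state property) and left untrained, and only the linear readout is fit, here by ridge regression on a one-step-ahead sine-prediction task. All sizes and constants are illustrative; the paper's recurrent-kernel and structured-transform variants are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 200
u = np.sin(0.1 * np.arange(T + 1))               # input signal

W_in = rng.normal(size=N) * 0.5                  # fixed random input weights
W = rng.normal(size=(N, N))                      # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # rescale spectral radius to 0.9

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])             # untrained reservoir dynamics
    states[t] = x

# Train only the linear readout: predict u[t+1] from the state at time t.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ u[1:])
pred = states @ W_out
print("prediction MSE:", np.mean((pred - u[1:]) ** 2))
```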
