Search Results for author: Rémi Gribonval

Found 48 papers, 14 papers with code

Sketch and shift: a robust decoder for compressive clustering

no code implementations15 Dec 2023 Ayoub Belhadji, Rémi Gribonval

In this work, we closely examine CL-OMPR in order to circumvent its limitations.

Clustering

Revisiting RIP guarantees for sketching operators on mixture models

no code implementations9 Dec 2023 Ayoub Belhadji, Rémi Gribonval

In the context of sketching for compressive mixture modeling, we revisit existing proofs of the Restricted Isometry Property of sketching operators with respect to certain mixture models.

Compressive Recovery of Sparse Precision Matrices

1 code implementation8 Nov 2023 Titouan Vayer, Etienne Lasalle, Rémi Gribonval, Paulo Gonçalves

We consider the problem of learning a graph modeling the statistical relations of the $d$ variables from a dataset with $n$ samples $X \in \mathbb{R}^{n \times d}$.
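
For orientation, here is a minimal sketch of the classical, non-compressive route to such a graph: estimating a sparse precision matrix with the graphical lasso from the full data matrix. The point of the paper is to recover the precision matrix from a small sketch of $X$ instead, which this baseline does not attempt; the data and the regularization level below are placeholders.

    # Classical baseline: learn a sparse precision matrix (hence a graph) from X in R^{n x d}
    # with the graphical lasso; the paper recovers it from a compressed sketch of X instead.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    n, d = 500, 10
    X = rng.standard_normal((n, d))           # placeholder data; replace with real samples

    model = GraphicalLasso(alpha=0.1).fit(X)  # alpha controls the sparsity of the precision matrix
    Theta = model.precision_                  # estimated sparse precision matrix (d x d)
    graph = (np.abs(Theta) > 1e-8) & ~np.eye(d, dtype=bool)
    print(graph.sum() // 2, "edges in the learned graph")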

A path-norm toolkit for modern networks: consequences, promises and challenges

1 code implementation2 Oct 2023 Antoine Gonon, Nicolas Brisebarre, Elisa Riccietti, Rémi Gribonval

The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds, by numerically evaluating the sharpest known bounds for ResNets on ImageNet.
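
As a toy illustration of the quantity involved, the L1 path-norm of a bias-free ReLU multilayer perceptron can be computed by propagating an all-ones vector through the absolute-valued weights; the toolkit of the paper handles far more general modern networks (biases, pooling, skip connections as in ResNets), which this sketch does not.

    # Toy computation of the L1 path-norm of a bias-free ReLU MLP:
    # the sum over all input-to-output paths of the product of absolute weights
    # equals |W_L| ... |W_1| applied to the all-ones input, summed over outputs.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((20, 10)), rng.standard_normal((5, 20))]  # W1, W2

    v = np.ones(weights[0].shape[1])
    for W in weights:
        v = np.abs(W) @ v
    l1_path_norm = v.sum()
    print("L1 path-norm:", l1_path_norm)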

Generalization Bounds

About the Cost of Central Privacy in Density Estimation

no code implementations26 Jun 2023 Clément Lalanne, Aurélien Garivier, Rémi Gribonval

We recover the result of Barber & Duchi (2014) stating that histogram estimators are optimal against Lipschitz distributions for the L2 risk under regular differential privacy, and we extend it to other norms and notions of privacy.
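
As a minimal sketch of the kind of estimator at stake, a pure ε-differentially private histogram density estimator adds Laplace noise to the bin counts; the noise scale 2/ε below is a common conservative choice for the replace-one neighbouring relation, and the optimality analysis of the paper is of course not reproduced.

    # Minimal epsilon-DP histogram density estimator on [0, 1]:
    # perturb each bin count with Laplace noise, clip at zero, renormalize.
    import numpy as np

    def dp_histogram_density(x, bins=20, eps=1.0, seed=0):
        rng = np.random.default_rng(seed)
        counts, edges = np.histogram(x, bins=bins, range=(0.0, 1.0))
        noisy = counts + rng.laplace(scale=2.0 / eps, size=bins)  # sensitivity 2 for replace-one
        noisy = np.clip(noisy, 0.0, None)
        width = edges[1] - edges[0]
        return noisy / max(noisy.sum(), 1e-12) / width, edges

    samples = np.random.default_rng(1).beta(2, 5, size=2000)
    density, edges = dp_histogram_density(samples, eps=0.5)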

Density Estimation

Does a sparse ReLU network training problem always admit an optimum?

no code implementations5 Jun 2023 Quoc-Tung Le, Elisa Riccietti, Rémi Gribonval

Then, the existence of a global optimum is proved for every concrete optimization problem involving a shallow sparse ReLU neural network of output dimension one.

Network Pruning

Private Statistical Estimation of Many Quantiles

no code implementations14 Feb 2023 Clément Lalanne, Aurélien Garivier, Rémi Gribonval

The first one consists in privately estimating the empirical quantiles of the samples and using this result as an estimator of the quantiles of the distribution.

Density Estimation

On the Statistical Complexity of Estimation and Testing under Privacy Constraints

no code implementations5 Oct 2022 Clément Lalanne, Aurélien Garivier, Rémi Gribonval

In certain scenarios, we show that maintaining privacy results in a noticeable reduction in performance only when the level of privacy protection is very high.

Self-supervised learning with rotation-invariant kernels

1 code implementation28 Jul 2022 Léon Zheng, Gilles Puy, Elisa Riccietti, Patrick Pérez, Rémi Gribonval

We introduce a regularization loss based on kernel mean embeddings with rotation-invariant kernels on the hypersphere (also known as dot-product kernels) for self-supervised learning of image representations.
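
A minimal sketch of such a regularizer, assuming L2-normalized embeddings and an exponential dot-product kernel chosen here for illustration (not necessarily the kernel of the paper): for rotation-invariant kernels, the squared MMD to the uniform distribution on the hypersphere equals the mean pairwise kernel value up to terms that do not depend on the embeddings, so that mean can serve directly as a uniformity loss.

    # Sketch of a kernel-mean-embedding regularizer on the hypersphere.
    # For rotation-invariant kernels k(<z_i, z_j>), the MMD^2 between the embedding
    # distribution and the uniform distribution on the sphere equals
    # mean_ij k(<z_i, z_j>) plus terms that do not depend on the embeddings.
    import numpy as np

    def rotation_invariant_uniformity_loss(z, t=2.0):
        z = z / np.linalg.norm(z, axis=1, keepdims=True)   # project onto the hypersphere
        gram = z @ z.T                                      # pairwise dot products
        return np.exp(t * gram).mean()                      # exponential dot-product kernel

    z = np.random.default_rng(0).standard_normal((256, 64))  # a batch of representations
    print(rotation_invariant_uniformity_loss(z))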

Self-Supervised Learning

Compressive Clustering with an Optical Processing Unit

no code implementations13 Jun 2022 Luc Giffon, Rémi Gribonval

We explore the use of Optical Processing Units (OPU) to compute random Fourier features for sketching, and adapt the overall compressive clustering pipeline to this setting.

Clustering

Approximation speed of quantized vs. unquantized ReLU neural networks and beyond

no code implementations24 May 2022 Antoine Gonon, Nicolas Brisebarre, Rémi Gribonval, Elisa Riccietti

This is achieved using a new lower-bound on the Lipschitz constant of the map that associates the parameters of ReLU networks to their realization, and an upper-bound generalizing classical results.

Quantization

Private Quantiles Estimation in the Presence of Atoms

no code implementations15 Feb 2022 Clément Sébastien Lalanne, Clément Gastaud, Nicolas Grislain, Aurélien Garivier, Rémi Gribonval

We consider the differentially private estimation of multiple quantiles (MQ) of a distribution from a dataset, a key building block in modern data analysis.

A theory of optimal convex regularization for low-dimensional recovery

no code implementations7 Dec 2021 Yann Traonmilin, Rémi Gribonval, Samuel Vaiter

To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint.

Controlling Wasserstein Distances by Kernel Norms with Application to Compressive Statistical Learning

no code implementations1 Dec 2021 Titouan Vayer, Rémi Gribonval

Based on the relations between the MMD and the Wasserstein distances, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein regularity of the learning task, that is, the situation where some task-specific metric between probability distributions can be bounded by a Wasserstein distance.

Identifiability in Two-Layer Sparse Matrix Factorization

no code implementations4 Oct 2021 Léon Zheng, Elisa Riccietti, Rémi Gribonval

In particular, in the case of fixed-support sparse matrix factorization, we give a general sufficient condition for identifiability based on rank-one matrix completability, and we derive from it a completion algorithm that can verify if this sufficient condition is satisfied, and recover the entries in the two sparse factors if this is the case.

Efficient Identification of Butterfly Sparse Matrix Factorizations

1 code implementation4 Oct 2021 Léon Zheng, Elisa Riccietti, Rémi Gribonval

Our main contribution is to prove that any $N \times N$ matrix having the so-called butterfly structure admits an essentially unique factorization into $J$ butterfly factors (where $N = 2^{J}$), and that the factors can be recovered by a hierarchical factorization method, which consists in recursively factorizing the considered matrix into two factors.
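
To make the butterfly structure concrete, here is a small numerical check (an illustrative example, not the hierarchical recovery algorithm of the paper) that the $2^J \times 2^J$ Hadamard matrix factors into $J$ butterfly factors with exactly two nonzeros per row and per column:

    # The Sylvester-Hadamard matrix H_{2^J} admits a butterfly factorization
    # H = B_1 B_2 ... B_J with B_j = I_{2^{j-1}} (x) H_2 (x) I_{2^{J-j}},
    # each factor having exactly two nonzeros per row and per column.
    import numpy as np
    from scipy.linalg import hadamard

    J = 4
    N = 2 ** J
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

    factors = [np.kron(np.kron(np.eye(2 ** (j - 1)), H2), np.eye(2 ** (J - j)))
               for j in range(1, J + 1)]

    product = np.linalg.multi_dot(factors)
    assert np.allclose(product, hadamard(N))
    assert all((np.count_nonzero(B, axis=0) == 2).all() and
               (np.count_nonzero(B, axis=1) == 2).all() for B in factors)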

Nonsmooth convex optimization to estimate the Covid-19 reproduction number space-time evolution with robustness against low quality data

no code implementations20 Sep 2021 Barbara Pascal, Patrice Abry, Nelly Pustelnik, Stéphane G. Roux, Rémi Gribonval, Patrick Flandrin

The present work aims to overcome these limitations by carefully crafting a functional that permits joint estimation, in a single step, of the reproduction number and of outliers introduced to model low-quality data.

Epidemiology

An Embedding of ReLU Networks and an Analysis of their Identifiability

no code implementations20 Jul 2021 Pierre Stock, Rémi Gribonval

The overall objective of this paper is to introduce an embedding for ReLU neural networks of any depth, $\Phi(\theta)$, that is invariant to scalings and that provides a locally linear parameterization of the realization of the network.
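
The rescaling invariance in question is easy to check numerically on a toy two-layer ReLU network (a hedged illustration only; the embedding $\Phi$ itself is not implemented here): multiplying the incoming weights and bias of each hidden neuron by a positive scale and dividing its outgoing weights by the same scale leaves the realized function unchanged.

    # Positive rescaling of hidden neurons leaves the realization of a ReLU network unchanged.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((32, 8)), rng.standard_normal(32)
    W2, b2 = rng.standard_normal((1, 32)), rng.standard_normal(1)

    def realize(x, W1, b1, W2, b2):
        return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

    lam = rng.uniform(0.1, 10.0, size=32)          # one positive scale per hidden neuron
    x = rng.standard_normal(8)
    y_original = realize(x, W1, b1, W2, b2)
    y_rescaled = realize(x, lam[:, None] * W1, lam * b1, W2 / lam, b2)
    assert np.allclose(y_original, y_rescaled)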

Fast Multiscale Diffusion on Graphs

1 code implementation29 Apr 2021 Sibylle Marcotte, Amélie Barbe, Rémi Gribonval, Titouan Vayer, Marc Sebban, Pierre Borgnat, Paulo Gonçalves

Diffusing a graph signal at multiple scales requires computing the action of the exponential of several multiples of the Laplacian matrix.
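
A baseline sketch of this operation using SciPy's generic routine for the action of a matrix exponential; the contribution of the paper is a faster polynomial scheme that shares computation across scales, which is not what is shown here.

    # Multiscale heat diffusion of a graph signal: compute exp(-tau * L) @ x for several scales tau.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import expm_multiply

    n = 200
    A = sp.random(n, n, density=0.05, random_state=0)
    A = ((A + A.T) > 0).astype(float)                   # symmetric unweighted adjacency
    L = sp.diags(np.ravel(A.sum(axis=1))) - A           # combinatorial graph Laplacian
    x = np.random.default_rng(0).standard_normal(n)     # graph signal

    scales = [0.5, 1.0, 2.0, 4.0]
    diffused = [expm_multiply(-tau * L, x) for tau in scales]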

Minibatch optimal transport distances; analysis and applications

2 code implementations5 Jan 2021 Kilian Fatras, Younes Zine, Szymon Majewski, Rémi Flamary, Rémi Gribonval, Nicolas Courty

We notably argue that the minibatch strategy comes with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with limits: the minibatch OT is not a distance.
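
The "not a distance" caveat is easy to observe numerically. A hedged sketch using the POT library (assumed installed, with arbitrary batch sizes): the minibatch OT estimate between two samples drawn from the same distribution remains strictly positive.

    # Minibatch OT: average exact OT cost over random minibatches.
    # Even when both samples come from the same distribution, the minibatch
    # estimate stays strictly positive, so minibatch OT is not a distance.
    import numpy as np
    import ot  # POT: Python Optimal Transport

    rng = np.random.default_rng(0)
    x = rng.standard_normal((2000, 2))
    y = rng.standard_normal((2000, 2))   # same distribution as x

    def minibatch_ot(x, y, m=64, n_batches=50):
        values = []
        for _ in range(n_batches):
            xb = x[rng.choice(len(x), m, replace=False)]
            yb = y[rng.choice(len(y), m, replace=False)]
            M = ot.dist(xb, yb)                                # squared Euclidean cost matrix
            values.append(ot.emd2(ot.unif(m), ot.unif(m), M))  # exact OT on the minibatch
        return np.mean(values)

    print(minibatch_ot(x, y))  # strictly positive although the two distributions coincide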

Sketching Datasets for Large-Scale Learning (long version)

no code implementations4 Aug 2020 Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter

This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.

BIG-bench Machine Learning Clustering +1

Statistical Learning Guarantees for Compressive Clustering and Compressive Mixture Modeling

no code implementations17 Apr 2020 Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
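
To make "a vector of random empirical generalized moments" concrete, here is a minimal sketch in the spirit of the random Fourier sketches used in this line of work; the frequency distribution and sketch size below are arbitrary placeholders.

    # A sketch as a vector of random empirical generalized moments: here, empirical
    # averages of random Fourier features, computable in a single pass over the data.
    import numpy as np

    def fourier_sketch(X, m=500, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        Omega = rng.standard_normal((d, m)) / scale   # random frequencies
        return np.exp(1j * (X @ Omega)).mean(axis=0)  # complex sketch of size m

    X = np.random.default_rng(1).standard_normal((10000, 10))
    z = fourier_sketch(X)     # subsequent learning uses only z, never the raw dataset
    print(z.shape)            # (500,)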

Clustering

Fast Optical System Identification by Numerical Interferometry

1 code implementation4 Nov 2019 Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić

Our method simplifies the calibration of optical transmission matrices from a quadratic to a linear inverse problem by first recovering the phase of the measurements.

Retrieval

Learning with minibatch Wasserstein : asymptotic and gradient properties

3 code implementations9 Oct 2019 Kilian Fatras, Younes Zine, Rémi Flamary, Rémi Gribonval, Nicolas Courty

Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning.

Don't take it lightly: Phasing optical random projections with unknown operators

1 code implementation NeurIPS 2019 Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić

A signal of interest $\mathbf{\xi} \in \mathbb{R}^N$ is mixed by a random scattering medium to compute the projection $\mathbf{y} = \mathbf{A} \mathbf{\xi}$, with $\mathbf{A} \in \mathbb{C}^{M \times N}$ being a realization of a standard complex Gaussian iid random matrix.
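
A small simulation of this measurement model (a hedged sketch: in the optical setting only the magnitudes, possibly quantized, are observed and $\mathbf{A}$ is unknown, which is precisely the difficulty addressed by the paper):

    # Measurement model y = A xi with A a standard complex Gaussian iid matrix.
    # In the optical setting only |y| (possibly quantized) is observed and A is unknown.
    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 128, 512
    A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    xi = rng.standard_normal(N)            # signal of interest

    y = A @ xi                             # complex projections
    magnitudes = np.abs(y)                 # what the camera actually records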

Quantization Retrieval

Stable safe screening and structured dictionaries for faster L1 regularization

1 code implementation17 Dec 2018 Cassio Fraga Dantas, Rémi Gribonval

In this paper, we propose a way to combine two acceleration techniques for the $\ell_{1}$-regularized least squares problem: safe screening tests, which make it possible to eliminate useless dictionary atoms, and the use of fast structured approximations of the dictionary matrix.
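
For context, a minimal sketch of a basic static safe screening test for the Lasso, in the spirit of the SAFE test of El Ghaoui et al.; the exact threshold below is an assumption used for illustration, and the contribution of the paper, screening that remains safe when the dictionary is replaced by a fast structured approximation, goes beyond this.

    # Static SAFE-style screening for  min_w 0.5*||y - D w||^2 + lam*||w||_1 :
    # atom j can be discarded (zero coefficient at the optimum) when
    #   |d_j^T y| < lam - ||d_j|| * ||y|| * (lam_max - lam) / lam_max,   lam_max = ||D^T y||_inf.
    # The test is conservative and most effective for lam close to lam_max.
    import numpy as np

    def safe_screen(D, y, lam):
        correlations = np.abs(D.T @ y)
        lam_max = correlations.max()
        radius = np.linalg.norm(y) * (lam_max - lam) / lam_max
        return correlations < lam - np.linalg.norm(D, axis=0) * radius  # True = atom eliminated

    rng = np.random.default_rng(0)
    D = rng.standard_normal((100, 1000))
    D /= np.linalg.norm(D, axis=0)
    y = rng.standard_normal(100)
    lam = 0.9 * np.abs(D.T @ y).max()
    print(safe_screen(D, y, lam).sum(), "atoms screened out of", D.shape[1])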

MULAN: A Blind and Off-Grid Method for Multichannel Echo Retrieval

1 code implementation NeurIPS 2018 Helena Peic Tukuljac, Antoine Deleforge, Rémi Gribonval

The approach operates directly in the parameter-space of echo locations and weights, and enables near-exact blind and off-grid echo retrieval from discrete-time measurements.

Information Retrieval Retrieval

Instance Optimal Decoding and the Restricted Isometry Property

no code implementations27 Feb 2018 Nicolas Keriven, Rémi Gribonval

In this paper, we address the question of information preservation in ill-posed, non-linear inverse problems, assuming that the measured data is close to a low-dimensional model set.

Compressive Sensing

Learning a Complete Image Indexing Pipeline

no code implementations CVPR 2018 Himalaya Jain, Joaquin Zepeda, Patrick Pérez, Rémi Gribonval

To work at scale, a complete image indexing system comprises two components: an inverted file index that restricts the actual search to a subset expected to contain most of the items relevant to the query, and an approximate distance computation mechanism to rapidly scan these lists.
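
A toy sketch of such a two-stage pipeline, with a k-means inverted file and exact distances on the shortlisted cells; the sizes and number of probed cells are arbitrary, and the learned end-to-end pipeline of the paper is not reproduced here.

    # Toy two-stage indexing: a k-means inverted file restricts the search to a few cells,
    # then exact distances are computed only on the shortlisted database vectors.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    database = rng.standard_normal((20000, 64))
    coarse = KMeans(n_clusters=256, n_init=1, random_state=0).fit(database)
    inverted_lists = {c: np.where(coarse.labels_ == c)[0] for c in range(256)}

    def search(query, n_probe=8, k=10):
        cell_dist = np.linalg.norm(coarse.cluster_centers_ - query, axis=1)
        candidates = np.concatenate([inverted_lists[c] for c in np.argsort(cell_dist)[:n_probe]])
        dist = np.linalg.norm(database[candidates] - query, axis=1)  # exact distances on the short list
        return candidates[np.argsort(dist)[:k]]

    print(search(rng.standard_normal(64)))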

Clustering

Compressive Statistical Learning with Random Feature Moments

no code implementations22 Jun 2017 Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We describe a general framework -- compressive statistical learning -- for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.

Clustering

Compressive K-means

no code implementations27 Oct 2016 Nicolas Keriven, Nicolas Tremblay, Yann Traonmilin, Rémi Gribonval

We demonstrate empirically that CKM performs similarly to Lloyd-Max, for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset.

Clustering General Classification

Approximate search with quantized sparse representations

no code implementations10 Aug 2016 Himalaya Jain, Patrick Pérez, Rémi Gribonval, Joaquin Zepeda, Hervé Jégou

This paper tackles the task of storing a large collection of vectors, such as visual descriptors, and of searching in it.

Quantization

Sketching for Large-Scale Learning of Mixture Models

no code implementations9 Jun 2016 Nicolas Keriven, Anthony Bourrier, Rémi Gribonval, Patrick Pérez

We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data.

Compressive Sensing Speaker Verification

Random sampling of bandlimited signals on graphs

no code implementations16 Nov 2015 Gilles Puy, Nicolas Tremblay, Rémi Gribonval, Pierre Vandergheynst

By contrast, the second strategy is adaptive but yields optimal results.
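
A hedged toy illustration of the underlying recovery problem, with plain uniform random sampling and least-squares decoding on a Laplacian eigenbasis; the optimized sampling distributions and efficient decoders of the paper are not reproduced here.

    # Recover a k-bandlimited graph signal from random node samples by least squares
    # on the first k Laplacian eigenvectors. Uniform sampling is used for simplicity.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k, m = 200, 10, 60
    A = (rng.random((n, n)) < 0.05).astype(float)
    A = np.triu(A, 1); A = A + A.T                       # symmetric adjacency
    L = np.diag(A.sum(axis=1)) - A                       # combinatorial Laplacian
    _, U = np.linalg.eigh(L)
    Uk = U[:, :k]                                        # low-frequency eigenbasis

    x = Uk @ rng.standard_normal(k)                      # a k-bandlimited signal
    idx = rng.choice(n, size=m, replace=False)           # sampled nodes
    coeffs, *_ = np.linalg.lstsq(Uk[idx], x[idx], rcond=None)
    print(np.linalg.norm(Uk @ coeffs - x))               # ~0: recovery from m < n samples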

Flexible Multi-layer Sparse Approximations of Matrices and Applications

no code implementations24 Jun 2015 Luc Le Magoarou, Rémi Gribonval

The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors.
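
The targeted gain is easy to quantify: applying a product of sparse factors costs roughly the total number of nonzeros instead of $N^2$ for a dense matrix. A toy comparison, with random sparse factors standing in for learned ones:

    # Applying a product of J sparse factors costs about the total number of nonzeros,
    # versus N^2 for the equivalent dense matrix.
    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(0)
    N, J = 1024, 5
    factors = [sp.random(N, N, density=4 / N, random_state=j, format="csr") for j in range(J)]

    dense = np.linalg.multi_dot([F.toarray() for F in factors])
    x = rng.standard_normal(N)

    y_fast = x
    for F in reversed(factors):
        y_fast = F @ y_fast                     # cost ~ sum of nnz(F_j) ~ 4*N*J
    y_dense = dense @ x                         # cost ~ N^2

    assert np.allclose(y_fast, y_dense)
    print("nonzeros in factors:", sum(F.nnz for F in factors), "vs dense:", N * N)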

BIG-bench Machine Learning Dictionary Learning +1

Learning Co-Sparse Analysis Operators with Separable Structures

no code implementations9 Mar 2015 Matthias Seibert, Julian Wörmann, Rémi Gribonval, Martin Kleinsteuber

In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure.

Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso

no code implementations12 Dec 2014 Antoine Bonnefoy, Valentin Emiya, Liva Ralaivola, Rémi Gribonval

Recent computational strategies based on screening tests have been proposed to accelerate algorithms addressing penalized sparse regression problems such as the Lasso.

regression

Sparse and spurious: dictionary learning with noise and outliers

no code implementations19 Jul 2014 Rémi Gribonval, Rodolphe Jenatton, Francis Bach

A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary.
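
A minimal sketch of this synthesis model with off-the-shelf tools (a generic baseline with arbitrary hyperparameters; the paper analyzes when such learned dictionaries are reliable in the presence of noise and outliers, which this code does not address):

    # Sparse synthesis model: each signal is approximated as a sparse combination
    # of atoms from a dictionary learned on the data.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 30))                     # 200 signals of dimension 30

    dico = DictionaryLearning(n_components=50, transform_algorithm="omp",
                              transform_n_nonzero_coefs=5, random_state=0).fit(X)
    codes = dico.transform(X)                              # sparse coefficients (200 x 50)
    reconstruction = codes @ dico.components_              # sparse linear combinations of atoms
    print(np.count_nonzero(codes, axis=1).mean())          # ~5 atoms per signal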

Dictionary Learning

Learning computationally efficient dictionaries and their implementation as fast transforms

no code implementations20 Jun 2014 Luc Le Magoarou, Rémi Gribonval

The resulting dictionary is in general a dense matrix, and its manipulation can be computationally costly both at the learning stage and later when the dictionary is used for tasks such as sparse coding.

Dictionary Learning

Separable Cosparse Analysis Operator Learning

no code implementations6 Jun 2014 Matthias Seibert, Julian Wörmann, Rémi Gribonval, Martin Kleinsteuber

The ability of having a sparse representation for a certain class of signals has many applications in data analysis, image processing, and other research fields.

Operator Learning

On The Sample Complexity of Sparse Dictionary Learning

no code implementations20 Mar 2014 Matthias Seibert, Martin Kleinsteuber, Rémi Gribonval, Rodolphe Jenatton, Francis Bach

The main goal of this paper is to provide a sample complexity estimate that controls to what extent the empirical average deviates from the cost function.

Dictionary Learning

Balancing Sparsity and Rank Constraints in Quadratic Basis Pursuit

no code implementations17 Mar 2014 Cagdas Bilen, Gilles Puy, Rémi Gribonval, Laurent Daudet

We investigate the methods that simultaneously enforce sparsity and low-rank structure in a matrix as often employed for sparse phase retrieval problems or phase calibration problems in compressive sensing.

Compressive Sensing Retrieval

Sample Complexity of Dictionary Learning and other Matrix Factorizations

no code implementations13 Dec 2013 Rémi Gribonval, Rodolphe Jenatton, Francis Bach, Martin Kleinsteuber, Matthias Seibert

Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis (PCA), non-negative matrix factorization (NMF), $K$-means clustering, etc., rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection.

Clustering Dictionary Learning +1

Wavelets on Graphs via Spectral Graph Theory

1 code implementation19 Dec 2009 David K Hammond, Pierre Vandergheynst, Rémi Gribonval

We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph.
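
A hedged toy version of the construction on a small graph, using a direct eigendecomposition and a simple band-pass kernel chosen for illustration; the practical contribution of the paper, a Chebyshev polynomial approximation that avoids diagonalizing the Laplacian, is not shown.

    # Spectral graph wavelet at scale t, localized at vertex i:
    # psi_{t,i} = U g(t * Lambda) U^T delta_i, with g a band-pass kernel on the spectrum.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = (rng.random((n, n)) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T
    L = np.diag(A.sum(axis=1)) - A
    lam, U = np.linalg.eigh(L)

    g = lambda x: x * np.exp(-x)                  # a simple band-pass kernel (vanishes at 0 and infinity)

    def wavelet(t, i):
        delta = np.zeros(n); delta[i] = 1.0
        return U @ (g(t * lam) * (U.T @ delta))   # g(tL) applied to a delta at vertex i

    psi = wavelet(t=2.0, i=0)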

Functional Analysis Information Theory 42C40; 65T90
