Search Results for author: Nicolas Keriven

Found 19 papers, 10 papers with code

Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs

no code implementations • 21 Apr 2023 • Matthieu Cordonnier, Nicolas Keriven, Nicolas Tremblay, Samuel Vaiter

We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity.
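
A minimal sketch of the setting, not the paper's code: nodes carry latent positions, edges are drawn from a similarity kernel, and a message-passing layer with a pluggable aggregation is iterated. The kernel and sizes are illustrative assumptions, and mean aggregation stands in for one instance of the generic aggregations studied.

```python
import numpy as np

def sample_latent_graph(n, kernel, rng):
    """Latent-space random graph: draw latent positions, then edges with prob. kernel(x_i, x_j)."""
    x = rng.uniform(-1, 1, size=(n, 1))
    p = kernel(x, x.T)                                   # (n, n) edge probabilities
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return x, (upper | upper.T).astype(float)            # symmetric, no self-loops

def mp_layer(a, h, aggregate):
    """One message-passing layer with a generic, pluggable neighbourhood aggregation."""
    return np.stack([aggregate(h[a[i] > 0]) if a[i].any() else h[i] for i in range(len(h))])

rng = np.random.default_rng(0)
x, a = sample_latent_graph(300, lambda u, v: 0.8 * np.exp(-(u - v) ** 2), rng)
h = x
for _ in range(3):
    h = mp_layer(a, h, aggregate=lambda m: m.mean(axis=0))  # mean as one generic aggregation
```

As the number of nodes grows, the output computed on the sampled graph approaches a continuous counterpart evaluated at the latent positions, which is the convergence the paper studies.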

Gradient scarcity with Bilevel Optimization for Graph Learning

1 code implementation • 24 Mar 2023 • Hashem Ghanem, Samuel Vaiter, Nicolas Keriven

To alleviate this issue, we study several solutions: latent graph learning using a Graph-to-Graph model (G2G), graph regularization to impose a prior structure on the graph, or optimization on a larger graph than the original one with a reduced diameter.

Bilevel Optimization • Graph Learning
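
Of the three remedies, graph regularization is the simplest to sketch. The penalty below (edge weights times squared feature distances) is an illustrative choice of prior, not necessarily the paper's:

```python
import torch

n = 50
x = torch.randn(n, 3)                       # node features (hypothetical)
w = torch.rand(n, n, requires_grad=True)    # adjacency learned in the outer problem

def graph_regularizer(w, x):
    """Prior structure: discourage heavy edges between nodes with dissimilar features."""
    return (w * torch.cdist(x, x) ** 2).sum() / n ** 2

# Added to the bilevel outer objective, this term supplies gradient to edges that the
# task loss alone leaves untouched (the "gradient scarcity" of the title).
loss = graph_regularizer(w, x)
loss.backward()
print(w.grad.abs().mean())
```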

Stability of Entropic Wasserstein Barycenters and application to random geometric graphs

1 code implementation • 19 Oct 2022 • Marc Theveneau, Nicolas Keriven

We then apply this result to random geometric graphs on manifolds, whose shortest paths converge to geodesics, hence proving the consistency of WBs computed on discretized shapes.
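
A small illustration with the POT library (`pip install pot`), assuming a 1-D grid where squared Euclidean distance stands in for the squared shortest-path cost on a discretized shape:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

n = 100
grid = np.linspace(0, 1, n)
a = np.exp(-((grid - 0.2) ** 2) / 0.01); a /= a.sum()   # two measures on the grid
b = np.exp(-((grid - 0.8) ** 2) / 0.01); b /= b.sum()

M = ot.dist(grid.reshape(-1, 1), grid.reshape(-1, 1))   # squared Euclidean ground cost
M /= M.max()

# Entropic Wasserstein barycenter of the two measures (equal weights).
bary = ot.bregman.barycenter(np.vstack([a, b]).T, M, reg=1e-2, weights=np.array([0.5, 0.5]))
```

The consistency result then follows because the shortest-path cost on the sampled graph converges to the geodesic cost, so the barycenter computed from it is a stable perturbation of the continuous one.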

Not too little, not too much: a theoretical analysis of graph (over)smoothing

1 code implementation • 24 May 2022 • Nicolas Keriven

We analyze graph smoothing with \emph{mean aggregation}, where each node successively receives the average of the features of its neighbors.
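
The operation under study is simple enough to state in a few lines; a sketch with a row-stochastic averaging matrix (graph size, density and feature dimension are arbitrary choices):

```python
import numpy as np

def mean_aggregation(a, x, steps):
    """Graph smoothing: each node repeatedly receives the average of its neighbours' features."""
    p = a / np.maximum(a.sum(axis=1, keepdims=True), 1)   # row-stochastic D^{-1} A
    for _ in range(steps):
        x = p @ x
    return x

rng = np.random.default_rng(0)
a = np.triu(rng.random((100, 100)) < 0.1, 1); a = (a | a.T).astype(float)
x = rng.normal(size=(100, 2))
print(np.var(mean_aggregation(a, x, 2), axis=0))    # a little smoothing reduces noise...
print(np.var(mean_aggregation(a, x, 50), axis=0))   # ...too much collapses features to a constant
```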

Entropic Optimal Transport in Random Graphs

1 code implementation • 11 Jan 2022 • Nicolas Keriven

In latent space random graphs, nodes are associated to unknown latent variables.
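
A sketch of entropic OT between two point clouds with the POT library. Here the cost is built directly from the latent variables for illustration, whereas in the paper's setting the latents are unknown and only the graph is observed:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(80, 2)), rng.normal(size=(80, 2))   # latent node variables

M = ot.dist(z1, z2); M /= M.max()            # ground cost between the two node sets
a = b = np.full(80, 1 / 80)                  # uniform weights on the nodes
plan = ot.sinkhorn(a, b, M, reg=0.05)        # entropic transport plan
```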

Supervised learning of analysis-sparsity priors with automatic differentiation

no code implementations • 15 Dec 2021 • Hashem Ghanem, Joseph Salmon, Nicolas Keriven, Samuel Vaiter

In most situations, this dictionary is not known, and is to be recovered from pairs of ground-truth signals and measurements, by minimizing the reconstruction error.

Denoising • Image Reconstruction
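
A hedged PyTorch sketch of the learning principle: unroll a differentiable denoiser for min_x 0.5*||y - x||^2 + lam*||Gamma x||_1 and backpropagate the reconstruction error to the analysis dictionary Gamma. The tanh smoothing of the l1 subgradient and all sizes are assumptions made for the sake of a runnable example:

```python
import torch

def unrolled_denoiser(y, gamma, lam=0.1, step=0.1, n_iter=20):
    """Unrolled (sub)gradient steps for 0.5*||y - x||^2 + lam*||gamma @ x||_1,
    kept differentiable with respect to the analysis dictionary `gamma`."""
    x = y.clone()
    for _ in range(n_iter):
        grad = x - y + lam * gamma.T @ torch.tanh(10 * (gamma @ x))  # smoothed sign
        x = x - step * grad
    return x

d, p = 32, 48
gamma = (torch.randn(p, d) / d ** 0.5).requires_grad_()    # dictionary to recover
x_true = torch.randn(d)
y = x_true + 0.1 * torch.randn(d)

loss = ((unrolled_denoiser(y, gamma) - x_true) ** 2).mean()  # reconstruction error
loss.backward()                                              # gradient w.r.t. gamma via autodiff
```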

On the Universality of Graph Neural Networks on Large Random Graphs

1 code implementation • NeurIPS 2021 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter

In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models.

Stochastic Block Model

Fast Graph Kernel with Optical Random Features

1 code implementation • 16 Oct 2020 • Hashem Ghanem, Nicolas Keriven, Nicolas Tremblay

Since this method can still be prohibitively costly with usual random features, we incorporate optical random features that can be computed in constant time.

Graph Classification
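
A classical random-feature sketch of the pipeline. The three-statistic graph descriptor below is a toy stand-in for the paper's sampled substructures, and an optical processing unit would replace the matrix-vector product `w @ x` with a constant-time physical computation:

```python
import numpy as np

def random_features(x, w, b):
    """Classical random Fourier features of a graph descriptor."""
    return np.sqrt(2 / len(b)) * np.cos(w @ x + b)

def graph_kernel(g1, g2, describe, w, b):
    return random_features(describe(g1), w, b) @ random_features(describe(g2), w, b)

rng = np.random.default_rng(0)
describe = lambda a: np.array([a.sum(), (a @ a).trace(), (a @ a @ a).trace()])  # toy descriptor
w, b = rng.normal(size=(64, 3)), rng.uniform(0, 2 * np.pi, 64)
g1 = np.triu(rng.random((20, 20)) < 0.2, 1); g1 = (g1 | g1.T).astype(float)
g2 = np.triu(rng.random((20, 20)) < 0.4, 1); g2 = (g2 | g2.T).astype(float)
print(graph_kernel(g1, g2, describe, w, b))
```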

Sketching Datasets for Large-Scale Learning (long version)

no code implementations • 4 Aug 2020 • Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter

This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.

BIG-bench Machine Learning • Clustering • +1
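
The core object is easy to sketch: a single pass compresses the dataset into a vector of random empirical generalized moments (random Fourier moments below; the sketch size m = 200 is an arbitrary choice):

```python
import numpy as np

def sketch(x, omega):
    """One-pass sketch: empirical random Fourier moments, z = mean_i exp(i * omega @ x_i)."""
    return np.exp(1j * x @ omega.T).mean(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(10_000, 10))    # dataset, seen once and then discarded
omega = rng.normal(size=(200, 10))   # m = 200 random frequencies
z = sketch(x, omega)                 # the whole dataset compressed to 200 complex numbers
# clustering / classification / regression is then performed from z alone
```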

Convergence and Stability of Graph Convolutional Networks on Large Random Graphs

1 code implementation • NeurIPS 2020 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter

We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel.

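A sketch of the model analyzed: latent node variables, edges drawn from a similarity kernel, and one symmetric-normalized GCN layer (the kernel, sizes and random weight matrix are illustrative):

```python
import numpy as np

def gcn_layer(a, h, w):
    """One GCN layer: D^{-1/2} (A + I) D^{-1/2}, then a linear map and ReLU."""
    a_hat = a + np.eye(len(a))
    d = a_hat.sum(axis=1)
    return np.maximum((a_hat / np.sqrt(np.outer(d, d))) @ h @ w, 0)

rng = np.random.default_rng(0)
z = rng.uniform(size=(100, 2))                            # latent node variables
p = np.exp(-4 * ((z[:, None] - z[None]) ** 2).sum(-1))    # similarity kernel
a = np.triu(rng.random((100, 100)) < p, 1)
a = (a | a.T).astype(float)
h = gcn_layer(a, z, rng.normal(size=(2, 8)))
```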

Statistical Learning Guarantees for Compressive Clustering and Compressive Mixture Modeling

no code implementations • 17 Apr 2020 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.

Clustering

Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model

no code implementations • 7 Feb 2020 • Nicolas Keriven, Samuel Vaiter

Existing results show that, in the relatively sparse case where the expected degree grows logarithmically with the number of nodes, guarantees for the static case extend to the dynamic case and yield improved error bounds when the DSBM is sufficiently smooth in time, that is, when the communities do not change too much between consecutive time steps.

Clustering • Stochastic Block Model
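
A hedged sketch of smoothed spectral clustering on a dynamic SBM: exponentially average the adjacency snapshots before the usual eigenvector-plus-k-means step. The EWMA smoothing below is an illustration in the spirit of the paper, not its exact estimator:

```python
import numpy as np
from sklearn.cluster import KMeans

def smoothed_spectral_clustering(snapshots, k, lam=0.5):
    """Cluster each time step from a temporally smoothed adjacency matrix."""
    s, labels = np.zeros_like(snapshots[0]), []
    for a in snapshots:
        s = lam * s + (1 - lam) * a                 # smooth across time steps
        _, vecs = np.linalg.eigh(s)
        labels.append(KMeans(n_clusters=k, n_init=10).fit_predict(vecs[:, -k:]))
    return labels

rng = np.random.default_rng(0)
snaps = [np.triu(rng.random((60, 60)) < 0.1, 1) for _ in range(5)]
labels = smoothed_spectral_clustering([(a | a.T).astype(float) for a in snaps], k=2)
```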

Universal Invariant and Equivariant Graph Neural Networks

1 code implementation • NeurIPS 2019 • Nicolas Keriven, Gabriel Peyré

In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems.

NEWMA: a new method for scalable model-free online change-point detection

1 code implementation • 21 May 2018 • Nicolas Keriven, Damien Garreau, Iacopo Poli

We consider the problem of detecting abrupt changes in the distribution of a multi-dimensional time series, with limited computing power and memory.

Change Point Detection • Time Series • +1
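
The mechanism is compact enough to sketch: maintain two exponentially weighted moving averages of a feature map with different forgetting factors and monitor their distance, which spikes after a distribution change. The random Fourier features and all constants below are assumptions:

```python
import numpy as np

def newma(stream, feature, eta_fast=0.05, eta_slow=0.01):
    """Two EWMAs of the same feature map, with different forgetting factors."""
    fast = slow = None
    for x in stream:
        phi = feature(x)
        fast = phi if fast is None else (1 - eta_fast) * fast + eta_fast * phi
        slow = phi if slow is None else (1 - eta_slow) * slow + eta_slow * phi
        yield np.linalg.norm(fast - slow)       # detection statistic

rng = np.random.default_rng(0)
w, b = rng.normal(size=(64, 3)), rng.uniform(0, 2 * np.pi, 64)
rf = lambda x: np.cos(w @ x + b)                # model-free random features
stream = [rng.normal(size=3) for _ in range(500)] + \
         [rng.normal(2.0, 1.0, size=3) for _ in range(500)]
stats = list(newma(stream, rf))                 # spikes shortly after index 500
```

Note that the memory footprint is two feature vectors regardless of the stream length, matching the limited-memory setting of the abstract.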

Instance Optimal Decoding and the Restricted Isometry Property

no code implementations • 27 Feb 2018 • Nicolas Keriven, Rémi Gribonval

In this paper, we address the question of information preservation in ill-posed, non-linear inverse problems, assuming that the measured data is close to a low-dimensional model set.

Compressive Sensing

Blind Source Separation Using Mixtures of Alpha-Stable Distributions

1 code implementation • 13 Nov 2017 • Nicolas Keriven, Antoine Deleforge, Antoine Liutkus

We propose a new blind source separation algorithm based on mixtures of alpha-stable distributions.

Blind Source Separation
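
A sketch of the generative model only (the separation algorithm itself is not reproduced): heavy-tailed sources drawn from alpha-stable laws via `scipy.stats.levy_stable` and mixed linearly; the stability parameters and mixing matrix are arbitrary:

```python
import numpy as np
from scipy.stats import levy_stable

# Two heavy-tailed sources with different stability parameters alpha.
s = np.stack([levy_stable.rvs(alpha=1.5, beta=0, size=1000, random_state=0),
              levy_stable.rvs(alpha=1.8, beta=0, size=1000, random_state=1)])
a = np.array([[1.0, 0.6],
              [0.4, 1.0]])    # unknown mixing matrix
x = a @ s                     # observed mixtures, the input to blind source separation
```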

Compressive Statistical Learning with Random Feature Moments

no code implementations • 22 Jun 2017 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We describe a general framework -- compressive statistical learning -- for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.

Clustering

Compressive K-means

no code implementations • 27 Oct 2016 • Nicolas Keriven, Nicolas Tremblay, Yann Traonmilin, Rémi Gribonval

We demonstrate empirically that CKM performs similarly to Lloyd-Max, for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset.

Clustering • General Classification
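
A toy decoder illustrating the principle: the dataset is reduced to one random Fourier sketch, and centroids are recovered by matching the sketch of the candidate centroids to it. The paper uses a dedicated greedy decoder; the generic derivative-free optimizer below is a hedged stand-in that assumes balanced clusters:

```python
import numpy as np
from scipy.optimize import minimize

def sketch(points, omega):
    return np.exp(1j * points @ omega.T).mean(axis=0)

rng = np.random.default_rng(0)
k, d = 3, 2
x = np.concatenate([rng.normal(c, 0.1, size=(500, d)) for c in (0.0, 2.0, 4.0)])
omega = rng.normal(size=(10 * k * d, d))   # sketch size proportional to k * d
z = sketch(x, omega)                       # the dataset can be discarded from here on

def objective(c_flat):
    return np.abs(z - sketch(c_flat.reshape(k, d), omega)).sum()

res = minimize(objective, rng.normal(size=k * d), method="Nelder-Mead",
               options={"maxiter": 5000})
centroids = res.x.reshape(k, d)            # compare against Lloyd-Max on the full data
```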

Sketching for Large-Scale Learning of Mixture Models

no code implementations • 9 Jun 2016 • Nicolas Keriven, Anthony Bourrier, Rémi Gribonval, Patrick Pérez

We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data.

Compressive Sensing • Speaker Verification
