no code implementations • 21 Apr 2023 • Matthieu Cordonnier, Nicolas Keriven, Nicolas Tremblay, Samuel Vaiter
We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity.
1 code implementation • 24 Mar 2023 • Hashem Ghanem, Samuel Vaiter, Nicolas Keriven
To alleviate this issue, we study several solutions: latent graph learning with a Graph-to-Graph model (G2G), graph regularization to impose a prior structure on the graph, and optimization on a larger graph than the original one with a reduced diameter.
1 code implementation • 19 Oct 2022 • Marc Theveneau, Nicolas Keriven
We then apply this result to random geometric graphs on manifolds, whose shortest paths converge to geodesics, hence proving the consistency of Wasserstein barycenters (WBs) computed on discretized shapes.
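For intuition, here is a minimal sketch of computing an entropic Wasserstein barycenter of two histograms with the POT library; on a discretized shape, the ground cost M would instead be the matrix of shortest-path distances mentioned above. The grid, kernel widths, and regularization level are illustrative choices, not values from the paper.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# Two histograms on a common 1-D grid (stand-ins for discretized shapes)
x = np.linspace(0, 1, 100)[:, None]
a = np.exp(-((x - 0.2) ** 2) / 0.01).ravel(); a /= a.sum()
b = np.exp(-((x - 0.8) ** 2) / 0.01).ravel(); b /= b.sum()

M = ot.dist(x, x)   # squared-Euclidean ground cost on the grid
M /= M.max()        # normalization for numerical stability
bary = ot.bregman.barycenter(np.column_stack([a, b]), M, reg=1e-2)
```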
1 code implementation • 24 May 2022 • Nicolas Keriven
We analyze graph smoothing with "mean aggregation", where each node successively receives the average of the features of its neighbors.
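A minimal numpy sketch of this smoothing operator (the dense adjacency representation and the function name are illustrative):

```python
import numpy as np

def mean_aggregation_smoothing(A, X, n_steps=3):
    """Each node repeatedly receives the average of its neighbors' features.

    A: (n, n) binary adjacency matrix, X: (n, d) node features.
    """
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1)   # row-normalized adjacency D^{-1} A
    for _ in range(n_steps):
        X = P @ X                # one smoothing step: neighborhood means
    return X
```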
1 code implementation • 11 Jan 2022 • Nicolas Keriven
In latent space random graphs, nodes are associated with unknown latent variables.
no code implementations • 15 Dec 2021 • Hashem Ghanem, Joseph Salmon, Nicolas Keriven, Samuel Vaiter
In most situations, this dictionary is not known, and is to be recovered from pairs of ground-truth signals and measurements, by minimizing the reconstruction error.
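As a toy illustration of this supervised recovery loop, one can unroll a few ISTA iterations as the reconstruction algorithm and differentiate through them with respect to the dictionary; the data model, dimensions, and hyperparameters below are made up for the sketch:

```python
import torch

# Illustrative data model: sparse signals x = D_true @ z, measurements y = Phi @ x
torch.manual_seed(0)
n, m, k = 32, 16, 48
Phi = torch.randn(m, n) / m ** 0.5
D_true = torch.randn(n, k)

def ista_reconstruct(y, Phi, D, n_iter=20, lam=0.1, step=0.05):
    """Reconstruct a signal from y by a few (unrolled) ISTA steps in dictionary D."""
    z = torch.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (Phi.T @ (Phi @ (D @ z) - y))
        z = torch.nn.functional.softshrink(z - step * grad, lam * step)
    return D @ z

D = torch.randn(n, k, requires_grad=True)  # dictionary to be learned
opt = torch.optim.Adam([D], lr=1e-2)
for _ in range(500):
    z_true = torch.zeros(k)
    z_true[torch.randperm(k)[:5]] = torch.randn(5)
    x = D_true @ z_true                    # ground-truth signal
    y = Phi @ x                            # its measurement
    loss = torch.sum((ista_reconstruct(y, Phi, D) - x) ** 2)
    opt.zero_grad()
    loss.backward()                        # gradient of reconstruction error w.r.t. D
    opt.step()
```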
1 code implementation • NeurIPS 2021 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter
In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models.
1 code implementation • 16 Oct 2020 • Hashem Ghanem, Nicolas Keriven, Nicolas Tremblay
While this method can still be prohibitively costly with usual random features, we then incorporate optical random features that can be computed in constant time.
no code implementations • 4 Aug 2020 • Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.
1 code implementation • NeurIPS 2020 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter
We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel.
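A small numpy sketch of this random graph model; the Gaussian similarity kernel is just one illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent_graph(n, kernel, d=2):
    """Draw a graph: latent variables x_i, edges with probability kernel(x_i, x_j)."""
    X = rng.uniform(-1, 1, size=(n, d))          # latent node positions
    P = kernel(X[:, None, :], X[None, :, :])     # pairwise connection probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)
    return X, A + A.T                            # symmetric, no self-loops

# Example: Gaussian similarity kernel
gauss = lambda x, y: np.exp(-np.sum((x - y) ** 2, axis=-1))
X, A = sample_latent_graph(200, gauss)
```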
no code implementations • 17 Apr 2020 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin
We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
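For concreteness, a minimal sketch of such a sketching operator using random Fourier moments, one standard choice of random generalized moments (the frequency scale here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, m):
    """One-pass sketch: empirical average of random Fourier moments exp(i w_j^T x)."""
    W = rng.normal(size=(m, X.shape[1]))       # random frequencies (illustrative scale)
    return np.exp(1j * X @ W.T).mean(axis=0)   # m complex empirical moments

X = rng.normal(size=(100_000, 10))  # large training collection
z = sketch(X, m=500)                # compressed to 500 numbers, independent of len(X)
```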
no code implementations • 7 Feb 2020 • Nicolas Keriven, Samuel Vaiter
Existing results show that, in the relatively sparse regime where the expected degree grows logarithmically with the number of nodes, guarantees from the static case extend to the dynamic stochastic block model (DSBM) and yield improved error bounds when the DSBM is sufficiently smooth in time, that is, when the communities do not change too much between consecutive time steps.
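A toy sampler for such a dynamic SBM, where smoothness in time corresponds to a small switching probability eps; the sparse regime discussed above would take connection probabilities on the order of log(n)/n rather than the constants used here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dsbm(n, T, p_in=0.8, p_out=0.2, eps=0.05, k=2):
    """Toy dynamic SBM: at each step, each node switches community w.p. eps,
    then edges are drawn with prob. p_in (same community) or p_out (different)."""
    c = rng.integers(k, size=n)          # initial community labels
    graphs = []
    for _ in range(T):
        switch = rng.random(n) < eps     # smoothness: few nodes switch per step
        c[switch] = rng.integers(k, size=switch.sum())
        P = np.where(c[:, None] == c[None, :], p_in, p_out)
        A = (rng.random((n, n)) < P).astype(float)
        A = np.triu(A, 1)
        graphs.append(A + A.T)
    return graphs
```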
1 code implementation • NeurIPS 2019 • Nicolas Keriven, Gabriel Peyré
In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems.
1 code implementation • 21 May 2018 • Nicolas Keriven, Damien Garreau, Iacopo Poli
We consider the problem of detecting abrupt changes in the distribution of a multi-dimensional time series, with limited computing power and memory.
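One memory-light scheme in this spirit (an illustrative sketch, not necessarily the paper's exact algorithm) compares a fast and a slow moving average of random features of the stream:

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_changes(stream, d, m=100, fast=0.05, slow=0.01, thresh=1.0):
    """Flag change points when a fast and a slow moving average of random
    features drift apart; memory is O(m), independent of the stream length."""
    W = rng.normal(size=(m, d))                       # fixed random projections
    mu_fast, mu_slow = np.zeros(m), np.zeros(m)
    for t, x in enumerate(stream):
        phi = np.cos(W @ x)                           # random Fourier-type features
        mu_fast = (1 - fast) * mu_fast + fast * phi
        mu_slow = (1 - slow) * mu_slow + slow * phi
        if np.linalg.norm(mu_fast - mu_slow) > thresh:
            yield t                                   # flagged change point

# usage: alarms = list(detect_changes(rows_of_data, d=20))
```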
no code implementations • 27 Feb 2018 • Nicolas Keriven, Rémi Gribonval
In this paper, we address the question of information preservation in ill-posed, non-linear inverse problems, assuming that the measured data is close to a low-dimensional model set.
1 code implementation • 13 Nov 2017 • Nicolas Keriven, Antoine Deleforge, Antoine Liutkus
We propose a new blind source separation algorithm based on mixtures of alpha-stable distributions.
no code implementations • 22 Jun 2017 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin
We describe a general framework -- compressive statistical learning -- for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
no code implementations • 27 Oct 2016 • Nicolas Keriven, Nicolas Tremblay, Yann Traonmilin, Rémi Gribonval
We demonstrate empirically that CKM performs similarly to Lloyd-Max, for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset.
no code implementations • 9 Jun 2016 • Nicolas Keriven, Anthony Bourrier, Rémi Gribonval, Patrick Pérez
We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data.