1 code implementation • 25 May 2023 • Rémi Bardenet, Michaël Fanuel, Alexandre Feller
Most applications require sampling from a DPP, and given their quantum origin, it is natural to wonder whether sampling a DPP on a quantum computer is easier than on a classical one.
1 code implementation • 31 Aug 2022 • Michaël Fanuel, Rémi Bardenet
In the context of large and dense graphs, we study here sparsifiers of the magnetic Laplacian, i.e., spectral approximations based on subgraphs with few edges.
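As a quick illustration (not from the paper), here is a minimal numpy sketch of the magnetic Laplacian being sparsified, assuming the common construction from a symmetrized adjacency and a phase matrix built from edge directions; the charge `q` and the toy graph are our choices:

```python
import numpy as np

# Minimal sketch (not the paper's code) of a magnetic Laplacian, assuming the
# common construction: symmetrize the adjacency, encode directions as phases.
q = 0.25                                  # charge parameter (our choice)
W = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])              # toy directed adjacency matrix

W_s = (W + W.T) / 2                       # symmetrized weights
Theta = 2 * np.pi * q * (W - W.T)         # antisymmetric phases from directions
H = W_s * np.exp(1j * Theta)              # Hermitian "magnetic" adjacency
Delta = np.diag(W_s.sum(axis=1)) - H      # magnetic Laplacian (Hermitian, PSD)

# Sparsifiers aim to approximate this spectrum using subgraphs with few edges.
print(np.linalg.eigvalsh(Delta))
```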
1 code implementation • 8 Feb 2022 • Barbara Pascal, Rémi Bardenet
Recent work in time-frequency analysis proposed to switch the focus from the maxima of the spectrogram toward its zeros, which, for signals corrupted by Gaussian noise, form a random point pattern with a very stable structure. Modern spatial statistics tools can leverage this structure to perform component disentanglement and signal detection.
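An illustrative sketch of the first step (our window and grid choices, not the paper's): flag candidate zeros of the spectrogram of a noisy signal as strict local minima of the squared STFT magnitude.

```python
import numpy as np
from scipy.signal import stft

# Sketch only: candidate spectrogram zeros as strict local minima of |STFT|^2
# over their 8 neighbours; window and signal are arbitrary choices of ours.
rng = np.random.default_rng(0)
fs = 1024
t = np.arange(2 * fs) / fs
x = np.cos(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(t.size)

_, _, Z = stft(x, fs=fs, window='hann', nperseg=128, noverlap=96)
S = np.abs(Z) ** 2                        # spectrogram

interior = S[1:-1, 1:-1]
neighbours = np.stack([S[i:i + interior.shape[0], j:j + interior.shape[1]]
                       for i in range(3) for j in range(3) if (i, j) != (1, 1)])
zeros = np.argwhere(interior < neighbours.min(axis=0)) + 1  # +1 restores the border offset
print(f"{len(zeros)} candidate zeros")
```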
1 code implementation • NeurIPS 2021 • Michaël Fanuel, Rémi Bardenet
Determinantal point processes (DPPs) are statistical models for repulsive point patterns.
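A short sketch of what "repulsive" means here: for a DPP with marginal kernel K, P(A in X) = det(K_A), so any pair of items is included with probability at most the product of their individual inclusion probabilities. The kernel below is a toy choice of ours, rescaled to be a valid marginal kernel.

```python
import numpy as np

# For a DPP with marginal kernel K, P(A in X) = det(K_A); for a pair {i, j},
# P(i and j in X) = K_ii * K_jj - K_ij^2 <= P(i in X) * P(j in X).
x = np.linspace(0, 1, 6)
G = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
K = G / (np.linalg.eigvalsh(G).max() + 1e-9)   # rescale: eigenvalues in [0, 1]

i, j = 0, 1
p_ij = np.linalg.det(K[np.ix_([i, j], [i, j])])
print(f"P(i)P(j) = {K[i, i] * K[j, j]:.4f} >= P(i and j) = {p_ij:.4f}")
```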
no code implementations • 9 Nov 2020 • Arnaud Poinas, Rémi Bardenet
Optimal design for linear regression is a fundamental task in statistics.
Computation
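For context, a hedged sketch of the classical D-optimal design objective for y = X beta + noise: choose a subset S of candidate design points maximizing log det(X_S^T X_S), which shrinks the confidence ellipsoid for beta. The greedy loop below is a simple baseline of ours, not the paper's method.

```python
import numpy as np

# Greedy D-optimal design baseline (our sketch, not the paper's algorithm).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))         # candidate design points
k = 10                                    # budget

S = []
for _ in range(k):
    best, best_val = None, -np.inf
    for i in range(len(X)):
        if i in S:
            continue
        idx = S + [i]
        _, logdet = np.linalg.slogdet(X[idx].T @ X[idx] + 1e-8 * np.eye(5))
        if logdet > best_val:
            best, best_val = i, logdet
    S.append(best)                        # row with the largest log-det gain
print("selected rows:", S)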
no code implementations • 8 Jul 2020 • Rémi Bardenet, Subhroshekhar Ghosh
Our approach is scalable and applies to very general DPPs, beyond traditional symmetric kernels.
no code implementations • ICML 2020 • Ayoub Belhadji, Rémi Bardenet, Pierre Chainais
A fundamental task in kernel methods is to pick nodes and weights, so as to approximate a given function from an RKHS by the weighted sum of kernel translates located at the nodes.
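A minimal sketch of this approximation problem (our toy setup, not the paper's algorithm): approximate f by f_hat(x) = sum_j w_j k(x, x_j), with weights obtained by interpolating f at the nodes, i.e. solving K w = f(nodes). How to pick the nodes is the crux of the paper.

```python
import numpy as np

# Approximate f in an RKHS by a weighted sum of kernel translates at nodes;
# the Gaussian kernel and the equispaced nodes are assumptions of this sketch.
def k(a, b, s=0.2):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s ** 2))

f = lambda x: np.sin(2 * np.pi * x)
nodes = np.linspace(0, 1, 12)
K = k(nodes, nodes) + 1e-10 * np.eye(len(nodes))   # small jitter for stability
w = np.linalg.solve(K, f(nodes))                   # weights of the translates

x_test = np.linspace(0, 1, 200)
f_hat = k(x_test, nodes) @ w
print("max error:", np.abs(f_hat - f(x_test)).max())
```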
1 code implementation • NeurIPS 2019 • Guillaume Gautier, Rémi Bardenet, Michal Valko
In the absence of DPP machinery to derive an efficient sampler and analyze the corresponding estimator, the idea of Monte Carlo integration with DPPs was stored in the cellar of numerical integration.
1 code implementation • NeurIPS 2019 • Ayoub Belhadji, Rémi Bardenet, Pierre Chainais
We study quadrature rules for functions from an RKHS, using nodes sampled from a determinantal point process (DPP).
no code implementations • 23 Dec 2018 • Ayoub Belhadji, Rémi Bardenet, Pierre Chainais
We give bounds on the ratio of the expected approximation error for this DPP over the optimal error of PCA.
2 code implementations • 19 Sep 2018 • Guillaume Gautier, Guillermo Polito, Rémi Bardenet, Michal Valko
Determinantal point processes (DPPs) are specific probability distributions over clouds of points that are used as models and computational tools across physics, probability, statistics, and more recently machine learning.
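A hedged usage sketch of the DPPy toolbox the paper describes; the class and method names below follow the DPPy documentation at the time of writing and may have evolved since.

```python
import numpy as np
from dppy.finite_dpps import FiniteDPP   # pip install dppy

# Sample a finite DPP defined through an L-ensemble (likelihood) kernel.
rng = np.random.RandomState(0)
Phi = rng.randn(10, 50)                  # random features
L = Phi.T.dot(Phi)                       # L-ensemble kernel

dpp = FiniteDPP('likelihood', L=L)
for _ in range(3):
    dpp.sample_exact(random_state=rng)   # exact sampling via the spectral method
print(dpp.list_of_samples)               # one list of item indices per sample
```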
1 code implementation • 30 Jul 2018 • Rémi Bardenet, Adrien Hardy
Finally, we provide quantitative estimates concerning the finite-dimensional approximations of these white noises, which is of practical interest when it comes to implementing signal processing algorithms based on GAFs.
Probability • Classical Analysis and ODEs • Methodology
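A hedged sketch of one such finite-dimensional approximation for the planar GAF f(z) = sum_k xi_k z^k / sqrt(k!): truncate the series at degree n and compute the zeros of the resulting random polynomial (the truncation level n is an arbitrary choice here).

```python
import numpy as np
from math import factorial

# Truncated planar GAF: coefficients xi_k / sqrt(k!) with xi_k i.i.d.
# complex standard Gaussians; zeros via the companion-matrix root finder.
rng = np.random.default_rng(0)
n = 20
xi = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
coeffs = xi / np.sqrt([float(factorial(k)) for k in range(n + 1)])

zeros = np.roots(coeffs[::-1])   # numpy expects the highest-degree coefficient first
print(np.sort_complex(zeros)[:5])
```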
1 code implementation • ICML 2017 • Guillaume Gautier, Rémi Bardenet, Michal Valko
Previous theoretical results yield a fast mixing time of our chain when targeting a distribution that is close to a projection DPP, but not a DPP in general.
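For orientation, here is a standard exchange (swap) Metropolis chain for a k-DPP with L-ensemble kernel L, shown only as a generic baseline; it is not the chain analyzed in the paper. The target is pi(S) proportional to det(L_S) over subsets of fixed size k.

```python
import numpy as np

# Generic swap chain for a k-DPP (baseline sketch, not the paper's sampler).
rng = np.random.default_rng(0)
N, k = 30, 5
Phi = rng.standard_normal((N, 8))
L = Phi @ Phi.T

logdet = lambda idx: np.linalg.slogdet(L[np.ix_(idx, idx)])[1]
S = list(rng.choice(N, size=k, replace=False))
for _ in range(1000):
    out = int(rng.choice([i for i in range(N) if i not in S]))   # item to bring in
    pos = int(rng.integers(k))                                   # slot to vacate
    S_new = S.copy()
    S_new[pos] = out
    if np.log(rng.uniform()) < logdet(S_new) - logdet(S):        # Metropolis accept
        S = S_new
print("final sample:", sorted(S))
```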
1 code implementation • 2 May 2016 • Rémi Bardenet, Adrien Hardy
We show that repulsive random variables can yield Monte Carlo methods with faster convergence rates than the typical $N^{-1/2}$, where $N$ is the number of integrand evaluations.
Probability • Classical Analysis and ODEs • Computation • Methodology
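A quick empirical check of the baseline N^{-1/2} rate for i.i.d. Monte Carlo, which the DPP-based estimators in the paper are designed to beat (the integrand is a toy choice of ours):

```python
import numpy as np

# RMS error of i.i.d. Monte Carlo shrinks like N^{-1/2}.
rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x) ** 2              # integrates to 1/2 on [0, 1]
for N in [100, 1_000, 10_000]:
    errs = [f(rng.uniform(size=N)).mean() - 0.5 for _ in range(200)]
    print(f"N = {N:>5}:  RMS error = {np.sqrt(np.mean(np.square(errs))):.5f}")
```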
no code implementations • NeurIPS 2015 • Rémi Bardenet, Michalis K. Titsias
DPPs possess desirable properties, such as exact sampling and analyticity of the moments, but learning the parameters of the kernel $K$ through likelihood-based inference is not straightforward.
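A sketch of the L-ensemble log-likelihood this refers to: for an observed set A, log p(A) = log det(L_A) - log det(L + I). Likelihood-based learning must optimize kernel parameters through these determinants, which is where the difficulty comes from.

```python
import numpy as np

# L-ensemble log-likelihood of an observed subset A (toy kernel of ours).
def dpp_log_likelihood(L, A):
    _, logdet_A = np.linalg.slogdet(L[np.ix_(A, A)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(len(L)))  # normalizing constant
    return logdet_A - logdet_Z

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 6))
L = Phi @ Phi.T
print(dpp_log_likelihood(L, [0, 3, 7]))
```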
1 code implementation • 11 May 2015 • Rémi Bardenet, Arnaud Doucet, Chris Holmes
Finally, we have so far only been able to propose subsampling-based methods that display good performance in scenarios where the Bernstein-von Mises approximation of the target posterior distribution is excellent.
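A deliberately naive illustration of the subsampling idea (this uncorrected version perturbs the chain; quantifying and controlling that error is exactly what such papers are about). The flat prior and Gaussian model are assumptions of this sketch.

```python
import numpy as np

# Naive subsampled Metropolis-Hastings: estimate the log-likelihood ratio
# from a random subsample of the data (biased without correction).
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=100_000)

def loglik_ratio_estimate(theta_new, theta_old, m=1_000):
    sub = rng.choice(data, size=m, replace=False)     # random subsample
    ll = lambda th: -0.5 * (sub - th) ** 2            # Gaussian log-density, up to constants
    return len(data) * np.mean(ll(theta_new) - ll(theta_old))

theta = 0.0
for _ in range(2_000):
    prop = theta + 0.05 * rng.standard_normal()
    if np.log(rng.uniform()) < loglik_ratio_estimate(prop, theta):
        theta = prop
print("final theta:", round(theta, 3))
```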
no code implementations • NeurIPS 2011 • James S. Bergstra, Rémi Bardenet, Yoshua Bengio, Balázs Kégl
Random search has been shown to be sufficiently efficient for training neural networks on several datasets, but we show it is unreliable for training DBNs.
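A minimal random-search sketch of the kind compared in the paper; the objective below is a toy stand-in for a validation error, not an actual DBN.

```python
import numpy as np

# Random search: sample hyperparameters from a prior, evaluate, keep the best.
rng = np.random.default_rng(0)

def validation_error(lr, n_hidden):       # toy objective (ours, not the paper's)
    return (np.log10(lr) + 2) ** 2 + (n_hidden - 128) ** 2 / 1e4

trials = zip(10 ** rng.uniform(-5, 0, 100), rng.integers(16, 512, 100))
best = min(((validation_error(lr, nh), lr, nh) for lr, nh in trials),
           key=lambda t: t[0])
print(f"best error {best[0]:.4f} at lr={best[1]:.2e}, n_hidden={best[2]}")
```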