2 code implementations • 23 Mar 2018 • Nicolas Tremblay, Simon Barthelmé, Pierre-Olivier Amblard
We apply our results to both the k-means and the linear regression problems, and give extensive empirical evidence that the small additional computational cost of DPP sampling comes with superior performance over its i.i.d. counterpart.
1 code implementation • 31 Oct 2022 • Simon Barthelmé, Nicolas Tremblay, Pierre-Olivier Amblard
Finally, an interesting by-product of the analysis is that a realisation from a DPP is typically contained in a subset of size O(m log m) formed using leverage score i.i.d. sampling.
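The leverage score i.i.d. sampling mentioned above can be sketched as follows; this is a generic illustration, not the paper's code, and all sizes and variable names are hypothetical. Leverage scores are the squared row norms of the left singular vectors of the data matrix, and they sum to its rank.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n points in dimension m (sizes are illustrative).
n, m = 200, 5
X = rng.standard_normal((n, m))

# Leverage scores: squared row norms of U in the thin SVD X = U S Vt.
# For a full-rank X they sum to m.
U, _, _ = np.linalg.svd(X, full_matrices=False)
leverage = np.sum(U**2, axis=1)

# i.i.d. sampling of indices with probability proportional to leverage.
probs = leverage / leverage.sum()
k = 40
sample = rng.choice(n, size=k, replace=True, p=probs)
```

High-leverage rows (those most influential for the column span of X) are picked more often, which is what makes the O(m log m) i.i.d. sample a plausible superset of a DPP realisation.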
no code implementations • 5 Mar 2018 • Simon Barthelmé, Pierre-Olivier Amblard, Nicolas Tremblay
In this work we show that as the size of the ground set grows, $k$-DPPs and DPPs become equivalent, meaning that their inclusion probabilities converge.
no code implementations • 5 Mar 2017 • Nicolas Tremblay, Pierre-Olivier Amblard, Simon Barthelmé
For large graphs, i.e., in cases where the graph's spectrum is not accessible, we investigate, both theoretically and empirically, a sub-optimal but much faster DPP based on loop-erased random walks on the graph.
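A loop-erased random walk, the building block referenced above, can be sketched in a few lines; this is a generic illustration (the graph, node names, and stopping set are made up, and the paper's actual sampler is more involved). Loops are erased in the order they are created, as in Wilson's algorithm.

```python
import numpy as np

def loop_erased_walk(neighbors, start, targets, rng):
    """Run one random walk from `start` until it hits `targets`,
    erasing loops as they form, and return the resulting simple path."""
    path = [start]
    position = {start: 0}  # node -> index of its (current) spot in path
    node = start
    while node not in targets:
        node = int(rng.choice(neighbors[node]))
        if node in position:
            # A loop just closed: erase everything after the first visit.
            loop_start = position[node]
            for erased in path[loop_start + 1:]:
                del position[erased]
            path = path[:loop_start + 1]
        else:
            position[node] = len(path)
            path.append(node)
    return path

# Toy example: a 4-cycle, walking from node 0 until node 2 is hit.
rng = np.random.default_rng(1)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
walk = loop_erased_walk(neighbors, start=0, targets={2}, rng=rng)
```

The returned `walk` is always a simple path from the start node to the target set, whatever loops the underlying walk made along the way.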
no code implementations • 11 Jun 2014 • Simon Barthelmé, Nicolas Chopin
Here we show that inferring the parameters of an unnormalised model on a space $\Omega$ can be mapped onto an equivalent problem of estimating the intensity of a Poisson point process on $\Omega$.
no code implementations • 15 Oct 2021 • Yusuf Pilavci, Pierre-Olivier Amblard, Simon Barthelmé, Nicolas Tremblay
Large dimensional least-squares and regularised least-squares problems are expensive to solve.
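For context on the cost being discussed, a direct regularised least-squares solve via the normal equations scales as O(nd² + d³), which is what becomes expensive at large dimension. The sketch below is a plain baseline with made-up sizes, not the method of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes; the regime of interest is n and d both large.
n, d = 500, 50
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam = 0.1  # ridge regularisation strength

# Direct solve of the regularised normal equations:
#   (A^T A + lam I) x = A^T b,   cost O(n d^2 + d^3).
x = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# The solution should zero the gradient of the ridge objective.
residual = A.T @ (A @ x - b) + lam * x
```

The cubic-in-d factorisation and the n·d² Gram-matrix product are the terms that randomised and sampling-based solvers aim to avoid.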