no code implementations • 14 Mar 2024 • Filippo Ascolani, Gareth O. Roberts, Giacomo Zanella
This allows us to study the performance of popular Metropolis-within-Gibbs schemes for non-conjugate hierarchical models, in high-dimensional regimes where both the number of datapoints and the number of parameters increase.
2 code implementations • 20 Dec 2023 • Max Goplerud, Omiros Papaspiliopoulos, Giacomo Zanella
We also provide generic results, which are of independent interest, relating the accuracy of variational inference to the convergence rate of the corresponding coordinate ascent variational inference (CAVI) algorithm for Gaussian targets.
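To make the CAVI iterates concrete, here is a minimal sketch of coordinate ascent variational inference with a mean-field Gaussian family on a bivariate Gaussian target with mean `(mu1, mu2)` and precision matrix `[[q11, q12], [q12, q22]]`: each update sets one variational mean to the target's conditional mean given the other. This is a generic illustration of the kind of iteration whose convergence rate is analyzed, not the paper's method; all names are hypothetical.

```python
# Mean-field CAVI on a bivariate Gaussian target: each sweep sets one
# variational mean to the target's conditional mean given the other.
def cavi_bivariate_gaussian(mu, prec, n_iter=100):
    mu1, mu2 = mu
    q11, q12, q22 = prec  # precision matrix entries [[q11, q12], [q12, q22]]
    m1, m2 = 0.0, 0.0     # variational means, initialized away from the optimum
    for _ in range(n_iter):
        m1 = mu1 - (q12 / q11) * (m2 - mu2)  # optimal factor for coordinate 1
        m2 = mu2 - (q12 / q22) * (m1 - mu1)  # optimal factor for coordinate 2
    return m1, m2

m1, m2 = cavi_bivariate_gaussian(mu=(1.0, -2.0), prec=(2.0, 0.5, 1.0))
```

The error contracts by the factor q12^2 / (q11 * q22) per sweep, so here the iterates converge linearly to the exact target mean (1, -2).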
no code implementations • 14 Apr 2023 • Filippo Ascolani, Giacomo Zanella
Gibbs samplers are popular algorithms to approximate posterior distributions arising from Bayesian hierarchical models.
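As a reminder of the basic scheme, here is a two-block Gibbs sampler for a bivariate normal target with correlation `rho`: each step draws one coordinate from its full conditional given the other. This is a generic textbook sketch, not the hierarchical models studied in the paper.

```python
import math
import random

# Two-block Gibbs sampler for a bivariate normal with unit variances and
# correlation rho: alternately sample each coordinate from its conditional.
def gibbs_bivariate_normal(rho, n_iter, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    cond_sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, cond_sd)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, cond_sd)  # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.5, n_iter=5000)
mean_x = sum(s[0] for s in samples) / len(samples)
```

With moderate correlation the chain mixes quickly and the sample mean of each coordinate is close to zero; as rho approaches 1 the conditionals become nearly degenerate and mixing slows, which is the regime where convergence analysis matters.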
no code implementations • 21 Nov 2022 • Philippe Gagnon, Florian Maire, Giacomo Zanella
We show both theoretically and empirically that this weight function induces pathological behaviours in high dimensions, especially during the convergence phase.
1 code implementation • 19 Sep 2022 • Luca Silva, Giacomo Zanella
Leave-one-out cross-validation (LOO-CV) is a popular method for estimating out-of-sample predictive accuracy.
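For reference, exact LOO-CV refits the model once per observation and scores each held-out prediction. The sketch below does this for simple least-squares regression on a toy dataset; it illustrates LOO-CV itself, not the fast approximations the paper develops, and all names are illustrative.

```python
# Exact leave-one-out cross-validation for simple 1D least-squares regression.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def loo_cv_mse(xs, ys):
    errs = []
    for i in range(len(xs)):
        # refit with observation i held out, then score its prediction
        xs_tr, ys_tr = xs[:i] + xs[i+1:], ys[:i] + ys[i+1:]
        slope, intercept = fit_line(xs_tr, ys_tr)
        pred = slope * xs[i] + intercept
        errs.append((ys[i] - pred) ** 2)
    return sum(errs) / len(errs)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]
score = loo_cv_mse(xs, ys)
```

The n refits are exactly what makes naive LOO-CV expensive for large datasets, motivating the fast approximations studied here.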
no code implementations • 25 May 2022 • Filippo Ascolani, Antonio Lijoi, Giovanni Rebaudo, Giacomo Zanella
Dirichlet process mixtures are flexible non-parametric models, particularly suited to density estimation and probabilistic clustering.
no code implementations • 4 Jan 2022 • Jure Vogrinc, Samuel Livingstone, Giacomo Zanella
We derive the optimal choice of noise distribution for the Barker proposal, the optimal choice of balancing function under a Gaussian noise distribution, and the optimal choice of first-order locally-balanced algorithm within the entire class, which turns out to depend on the specific target distribution.
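To fix ideas, here is a sketch of the basic Barker proposal in one dimension targeting a standard normal: a Gaussian increment `z` is kept or sign-flipped with a probability depending on the gradient of the log-density, followed by a Metropolis-Hastings correction that keeps the chain exactly invariant. This illustrates the baseline scheme whose noise and balancing-function choices the paper optimizes; it is not the paper's code.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# 1D Barker proposal targeting N(0, 1): log pi(x) = -x^2/2, grad = -x.
def barker_chain(n_iter, sigma=1.0, seed=1):
    rng = random.Random(seed)
    logpi = lambda x: -0.5 * x * x
    grad = lambda x: -x
    x, out = 0.0, []
    for _ in range(n_iter):
        z = rng.gauss(0.0, sigma)
        # keep the increment with probability sigmoid(z * grad(x)), else flip it
        if rng.random() >= sigmoid(z * grad(x)):
            z = -z
        y = x + z
        # MH correction: the proposal density ratio reduces to sigmoid terms
        log_ratio = (logpi(y) - logpi(x)
                     + math.log(sigmoid(-z * grad(y)))
                     - math.log(sigmoid(z * grad(x))))
        if math.log(rng.random()) < log_ratio:
            x = y
        out.append(x)
    return out

chain = barker_chain(20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The sigmoid here is the Barker balancing function g(t) = t / (1 + t) applied on the log scale; other balancing functions and noise distributions give other members of the first-order locally-balanced class.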
no code implementations • 17 Dec 2020 • Max Hird, Samuel Livingstone, Giacomo Zanella
We provide a full derivation of the method from first principles, placing it within a wider class of continuous-time Markov jump processes.
Computation • Methodology
no code implementations • 4 Apr 2020 • Brenda Betancourt, Giacomo Zanella, Rebecca C. Steorts
Motivated by these issues, we propose a general class of random partition models that satisfy the microclustering property with well-characterized theoretical properties.
Methodology • Statistics Theory
2 code implementations • 15 Nov 2019 • Augusto Fasano, Daniele Durante, Giacomo Zanella
Modern methods for Bayesian regression beyond the Gaussian response setting are often computationally impractical or inaccurate in high dimensions.
Methodology • Computation
1 code implementation • 1 May 2018 • Giacomo Zanella, Gareth Roberts
We propose a Monte Carlo algorithm to sample from high dimensional probability distributions that combines Markov chain Monte Carlo and importance sampling.
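A generic way to combine the two ingredients, sketched below under simplifying assumptions: run a random-walk Metropolis chain on a flattened (tempered) version of the target, then reweight the draws back to the original target with self-normalized importance weights. This is only an illustration of the MCMC-plus-importance-sampling idea; the paper's algorithm is more sophisticated.

```python
import math
import random

# Run Metropolis on pi(x)^beta (a flatter target), then importance-reweight
# the draws by pi / pi^beta = pi^(1-beta) to estimate expectations under pi.
def tempered_mcmc_is(n_iter, beta=0.5, step=1.5, seed=2):
    logpi = lambda x: -0.5 * x * x  # target: standard normal
    rng = random.Random(seed)
    x, xs, logw = 0.0, [], []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)
        # Metropolis accept/reject targeting the tempered density pi^beta
        if math.log(rng.random()) < beta * (logpi(y) - logpi(x)):
            x = y
        xs.append(x)
        logw.append((1.0 - beta) * logpi(x))  # log importance weight
    m = max(logw)                             # stabilize before exponentiating
    w = [math.exp(l - m) for l in logw]
    total = sum(w)
    # self-normalized importance sampling estimate of E[x] under pi
    return sum(wi * xi for wi, xi in zip(w, xs)) / total

est = tempered_mcmc_is(30000)
```

The tempered chain explores more freely than one targeting pi directly, at the cost of weight variability, which is the trade-off such combined schemes must control.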
no code implementations • 26 Mar 2018 • Omiros Papaspiliopoulos, Gareth O. Roberts, Giacomo Zanella
We analyze the complexity of Gibbs samplers for inference in crossed random effect models used in modern analysis of variance.
1 code implementation • 20 Nov 2017 • Giacomo Zanella
There is a lack of methodological results to design efficient Markov chain Monte Carlo (MCMC) algorithms for statistical models with discrete-valued high-dimensional parameters.
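The locally balanced ("informed") proposals studied in this line of work can be sketched on a binary vector: each single-bit flip is proposed with probability proportional to sqrt(pi(y)/pi(x)), followed by a Metropolis-Hastings correction, which for this balancing function reduces to the ratio of normalizing constants Z(x)/Z(y). The target below, independent Bernoulli coordinates, is an illustrative toy choice, not one from the paper.

```python
import math
import random

# Locally balanced sampler on {0,1}^d with target pi(x) = prod p_i^{x_i} (1-p_i)^{1-x_i}.
# Neighbors are single-bit flips; flip i is proposed with weight sqrt(pi(y)/pi(x)).
def lb_binary_sampler(probs, n_iter, seed=3):
    rng = random.Random(seed)
    d = len(probs)
    logit = [math.log(p / (1.0 - p)) for p in probs]
    x = [0] * d
    counts = [0] * d

    def flip_logratio(state, i):
        # log pi(flip_i(state)) - log pi(state)
        return logit[i] * (1 - 2 * state[i])

    def weights(state):
        return [math.exp(0.5 * flip_logratio(state, i)) for i in range(d)]

    for _ in range(n_iter):
        w = weights(x)
        zx = sum(w)
        i = rng.choices(range(d), weights=w)[0]  # informed choice of bit to flip
        y = x[:]
        y[i] = 1 - y[i]
        zy = sum(weights(y))
        # for g(t) = sqrt(t) the MH acceptance ratio simplifies to Z(x)/Z(y)
        if rng.random() < min(1.0, zx / zy):
            x = y
        for j in range(d):
            counts[j] += x[j]
    return [c / n_iter for c in counts]

freqs = lb_binary_sampler([0.2, 0.5, 0.8], n_iter=20000)
```

The empirical frequency of each bit should match its marginal probability, confirming invariance of this toy instance.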
Computation • Probability
1 code implementation • 4 Sep 2017 • Anthony Lee, Simone Tiberi, Giacomo Zanella
This is wasteful and typically requires the number of particles to grow quadratically with the number of expectations.
Computation
no code implementations • NeurIPS 2016 • Giacomo Zanella, Brenda Betancourt, Hanna Wallach, Jeffrey Miller, Abbas Zaidi, Rebecca C. Steorts
Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points.