1 code implementation • 20 Mar 2024 • Charles C. Margossian, Loucas Pillaud-Vivien, Lawrence K. Saul
Our analysis covers the KL divergence, the Rényi divergences, and a score-based divergence that compares $\nabla\log p$ and $\nabla\log q$.
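As a concrete reading of these three quantities, the sketch below estimates each one by Monte Carlo for a pair of univariate Gaussians standing in for $p$ and $q$. The Gaussian choices and the Rényi order are illustrative assumptions, and the score-based term is written as the plain Fisher-type divergence $\mathbb{E}_p\,(\nabla\log p - \nabla\log q)^2$, which may differ in weighting from the divergence analyzed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two univariate Gaussians standing in for the target p and the approximation q.
p = stats.norm(loc=0.0, scale=1.0)
q = stats.norm(loc=0.5, scale=0.7)
x = p.rvs(size=200_000, random_state=rng)   # Monte Carlo draws from p

log_ratio = p.logpdf(x) - q.logpdf(x)

kl = log_ratio.mean()                       # KL(p || q) = E_p[log p - log q]
alpha = 0.5                                 # Renyi order (any alpha != 1)
renyi = np.log(np.mean(np.exp((alpha - 1.0) * log_ratio))) / (alpha - 1.0)

# Fisher-type score divergence: E_p[(d/dx log p - d/dx log q)^2].
score_p = -(x - 0.0) / 1.0**2
score_q = -(x - 0.5) / 0.7**2
score_div = np.mean((score_p - score_q) ** 2)

print(f"KL = {kl:.3f},  Renyi({alpha}) = {renyi:.3f},  score-based = {score_div:.3f}")
```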
1 code implementation • 29 Feb 2024 • David Heurtel-Depeiges, Charles C. Margossian, Ruben Ohana, Bruno Régaldo-Saint Blancard
Assuming arbitrary parametric Gaussian noise, we develop a Gibbs algorithm that alternates between sampling steps from a conditional diffusion model, trained to map the signal prior to the family of noise distributions, and a Monte Carlo sampler that infers the noise parameters.
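The alternation is easiest to see in a toy conjugate version of the same structure. In the sketch below, the conditional diffusion model is replaced by an exact Gaussian draw of the signal given the current noise variance, and the Monte Carlo step by an exact inverse-gamma draw of the noise variance given the current signal; the Gaussian signal prior and inverse-gamma noise prior are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for blind denoising: y = x + eps, with x ~ N(0, 1) per component
# and eps ~ N(0, sigma2) for an unknown sigma2 ~ InvGamma(a0, b0).
n, true_sigma2, a0, b0 = 500, 0.3, 2.0, 1.0
x_true = rng.normal(size=n)
y = x_true + rng.normal(scale=np.sqrt(true_sigma2), size=n)

sigma2, draws = 1.0, []
for _ in range(2000):
    # 1) signal given noise parameters: x | y, sigma2 (conjugate Gaussian draw)
    v = 1.0 / (1.0 + 1.0 / sigma2)
    x = rng.normal(loc=v * y / sigma2, scale=np.sqrt(v))
    # 2) noise parameters given signal: sigma2 | y, x (inverse-gamma draw)
    a, b = a0 + n / 2.0, b0 + 0.5 * np.sum((y - x) ** 2)
    sigma2 = b / rng.gamma(a)
    draws.append(sigma2)

# Roughly recovers the true noise variance after burn-in.
print("posterior mean of sigma2:", np.mean(draws[500:]))
```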
no code implementations • 22 Feb 2024 • Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles C. Margossian, Robert M. Gower, David M. Blei, Lawrence K. Saul
We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that, in the limit of infinite batch size, the variational parameter updates converge exponentially quickly to the target mean and covariance.
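Spelled out, for a Gaussian target $\mathcal{N}(\mu_*, \Sigma_*)$ and Gaussian variational iterates $(\mu_t, \Sigma_t)$, exponential convergence means a bound of the form below for some rate $\rho \in (0,1)$; the constants and the exact rate are the paper's, this is only the shape of the claim:

$$\|\mu_t - \mu_*\| \le C\,\rho^{\,t}, \qquad \|\Sigma_t - \Sigma_*\| \le C\,\rho^{\,t}.$$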
2 code implementations • 20 Jul 2023 • Charles C. Margossian, David M. Blei
We then show, on a broader class of models, how to expand the domain of AVI's inference function to improve its solution, and we provide examples, e.g. hidden Markov models, where the amortization gap cannot be closed.
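For reference, one common way to formalize the amortization gap (the notation below is an assumption, not a quotation from the paper): with per-datapoint variational parameters $\lambda_i$ on one side and an inference function $f_\phi$ shared across data points on the other,

$$\Delta_{\text{amort}} \;=\; \max_{\lambda_1,\dots,\lambda_n} \sum_{i=1}^{n} \mathrm{ELBO}(\lambda_i; x_i) \;-\; \max_{\phi} \sum_{i=1}^{n} \mathrm{ELBO}\big(f_\phi(x_i); x_i\big) \;\ge\; 0.$$

Closing the gap then means finding a family of inference functions rich enough that the second maximum matches the first.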
1 code implementation • 17 Feb 2023 • Charles C. Margossian, Lawrence K. Saul
We study various manifestations of this trade-off, notably one where, as the dimension of the problem grows, the per-component entropy gap between $p$ and $q$ becomes vanishingly small even though $q$ underestimates every componentwise variance by a constant multiplicative factor.
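A minimal numerical illustration of this phenomenon, assuming an equicorrelated Gaussian target and the factorized Gaussian $q$ that minimizes $\mathrm{KL}(q\,\|\,p)$ (whose optimal componentwise variances are the reciprocals of the diagonal entries of the target precision matrix):

```python
import numpy as np

def mean_field_gap(n, rho):
    """Per-component entropy gap and variance ratio for the factorized Gaussian
    fit (minimizing KL(q || p)) to an equicorrelated Gaussian target."""
    Sigma = (1 - rho) * np.eye(n) + rho * np.ones((n, n))  # target covariance
    Lambda = np.linalg.inv(Sigma)                          # target precision
    q_var = 1.0 / np.diag(Lambda)          # optimal factorized variances
    var_ratio = q_var / np.diag(Sigma)     # componentwise underestimation factor
    # entropy gap H(p) - H(q), divided by the dimension
    gap = 0.5 * (np.linalg.slogdet(Sigma)[1] - np.sum(np.log(q_var))) / n
    return gap, var_ratio.mean()

for n in [2, 10, 100, 1000]:
    gap, ratio = mean_field_gap(n, rho=0.5)
    print(f"n={n:5d}  per-component entropy gap={gap:.4f}  variance ratio={ratio:.3f}")
```

As $n$ grows, the per-component entropy gap decays toward zero while the variance ratio levels off near $1 - \rho$, i.e. each marginal variance remains underestimated by a roughly constant factor.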
3 code implementations • 12 Nov 2018 • Charles C. Margossian
Derivatives play a critical role in computational statistics, examples being Bayesian inference using Hamiltonian Monte Carlo sampling and the training of neural networks.
Mathematical Software • Computation
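The role these derivatives play is easiest to see in a toy forward-mode automatic differentiation sketch: a dual number carries a value together with its derivative, so the derivative of a program falls out of the same arithmetic that evaluates it. This is a teaching sketch in Python, not the optimized (typically reverse-mode) implementations the review surveys.

```python
class Dual:
    """Minimal forward-mode AD: carries a value and its derivative together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val, self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # derivative: 6x + 2

y = f(Dual(2.0, 1.0))   # seed dx/dx = 1
print(y.val, y.der)     # 17.0 14.0
```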