no code implementations • NeurIPS 2018 • Rui Li, Kishan Kc, Feng Cui, Justin Domke, Anne Haake
This paper studies statistical relationships among components of high-dimensional observations varying across non-random covariates.
no code implementations • 11 Dec 2024 • Abhinav Agrawal, Justin Domke
Normalizing flow-based variational inference (flow VI) is a promising approximate inference approach, but its performance remains inconsistent across studies.
1 code implementation • 31 Oct 2024 • Jinlin Lai, Justin Domke, Daniel Sheldon
A naive approach introduces cubic time operations within an inference algorithm like HMC, but we reduce the running time to linear using fast linear algebra techniques.
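As a hedged illustration of how structure can turn a cubic-time solve into a linear-time one (the matrix structure and identity below are illustrative assumptions, not necessarily this paper's construction), the sketch solves a diagonal-plus-rank-one linear system with the Sherman-Morrison identity in O(n), where a dense solve would cost O(n^3).

```python
import numpy as np

def solve_diag_plus_rank1(d, u, v, b):
    """Solve (diag(d) + u v^T) x = b in O(n) via the Sherman-Morrison identity.

    A dense np.linalg.solve on the same matrix would cost O(n^3).
    """
    Ainv_b = b / d   # diag(d)^{-1} b
    Ainv_u = u / d   # diag(d)^{-1} u
    return Ainv_b - Ainv_u * (v @ Ainv_b) / (1.0 + v @ Ainv_u)

# Quick check against the dense solve on a small illustrative system.
rng = np.random.default_rng(0)
n = 5
d, u, v, b = rng.uniform(1, 2, n), rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
assert np.allclose(solve_diag_plus_rank1(d, u, v, b),
                   np.linalg.solve(np.diag(d) + np.outer(u, v), b))
```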
no code implementations • 30 May 2024 • Abhinav Agrawal, Justin Domke
Predictive posterior densities (PPDs) are of interest in approximate Bayesian inference.
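For reference, the predictive posterior density of new data $y^\ast$ given observations $y$ averages the likelihood over the posterior; in approximate inference the integral is typically estimated with draws $\theta_s$ from an approximate posterior $q$:

$$ p(y^\ast \mid y) \;=\; \int p(y^\ast \mid \theta)\, p(\theta \mid y)\, d\theta \;\approx\; \frac{1}{S}\sum_{s=1}^{S} p(y^\ast \mid \theta_s), \qquad \theta_s \sim q(\theta). $$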
no code implementations • 25 Oct 2023 • Yuling Yao, Bruno Régaldo-Saint Blancard, Justin Domke
Simulation-based inference has been popular for amortized Bayesian computation.
no code implementations • NeurIPS 2023 • Justin Domke, Guillaume Garrigos, Robert Gower
Black-box variational inference is widely used in situations where there is no proof that its stochastic optimization succeeds.
1 code implementation • NeurIPS 2023 • Yuling Yao, Justin Domke
To check the accuracy of Bayesian computations, it is common to use rank-based simulation-based calibration (SBC).
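A minimal sketch of the rank-based SBC procedure (the conjugate Gaussian model and sampler below are illustrative placeholders, not this paper's setup): draw a parameter from the prior, simulate data, draw posterior samples from the algorithm being checked, and record the rank of the true parameter; over many replications the ranks should be uniform if the computation is exact.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbc_ranks(num_reps=1000, num_post_draws=99):
    """Rank-based SBC for a toy conjugate Gaussian model (illustrative only)."""
    ranks = []
    for _ in range(num_reps):
        theta = rng.normal(0.0, 1.0)             # theta ~ N(0, 1) prior
        y = rng.normal(theta, 1.0)               # y | theta ~ N(theta, 1)
        # Exact posterior here is N(y/2, 1/2); a real check would instead call
        # the approximate inference algorithm under test.
        post = rng.normal(y / 2.0, np.sqrt(0.5), size=num_post_draws)
        ranks.append(int(np.sum(post < theta)))  # rank in {0, ..., num_post_draws}
    return np.array(ranks)

# If the sampler is correct, the ranks are uniform on {0, ..., 99}.
print(np.histogram(sbc_ranks(), bins=10)[0])
```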
no code implementations • 13 Apr 2023 • Javier Burroni, Justin Domke, Daniel Sheldon
We present a novel approach for black-box VI that bypasses the difficulties of stochastic gradient ascent, including the task of selecting step-sizes.
no code implementations • Approximate Inference AABI Symposium 2022 • Javier Burroni, Kenta Takatsu, Justin Domke, Daniel Sheldon
We propose the use of U-statistics to reduce variance for gradient estimation in importance-weighted variational inference.
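A minimal sketch of the idea, with illustrative placeholder weights: given $n$ importance weights, the $K$-sample importance-weighted objective can be averaged over all size-$K$ subsets (a U-statistic) rather than computed from a single grouping of the samples, which reduces the variance of the estimate.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def iw_elbo_ustat(log_w, K):
    """U-statistic estimate of the K-sample importance-weighted bound.

    Averages log-mean-exp over all size-K subsets of the n weights instead of
    using a single partition into groups of K.  (Illustrative sketch only;
    enumerating all subsets is feasible only for small n.)
    """
    vals = []
    for idx in combinations(range(len(log_w)), K):
        lw = log_w[list(idx)]
        m = lw.max()
        vals.append(m + np.log(np.mean(np.exp(lw - m))))  # log (1/K) sum_k w_k
    return np.mean(vals)

# Hypothetical log importance weights log p(x, z_i) - log q(z_i).
log_w = rng.normal(size=12)
print(iw_elbo_ustat(log_w, K=4))
```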
1 code implementation • 13 Oct 2022 • Xi Wang, Tomas Geffner, Justin Domke
The performance of black-box variational inference is sometimes hindered by the use of gradient estimators with high variance.
no code implementations • 16 Aug 2022 • Tomas Geffner, Justin Domke
In fact, using our formulation we propose a new method that combines the strengths of previously existing algorithms; it uses underdamped Langevin transitions and powerful augmentations parameterized by a score network.
no code implementations • 8 Mar 2022 • Tomas Geffner, Justin Domke
Hierarchical models represent a challenging setting for inference algorithms.
no code implementations • NeurIPS 2021 • Abhinav Agrawal, Justin Domke
It is difficult to use subsampling with variational inference in hierarchical models since the number of local latent variables scales with the dataset.
1 code implementation • 30 Sep 2021 • Jinlin Lai, Justin Domke, Daniel Sheldon
We reveal that the marginal particle filter is obtained from sequential Monte Carlo by applying Rao-Blackwellization operations, which sacrifices the trajectory information for reduced variance and differentiability.
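The Rao-Blackwellization step invoked here rests on the law of total variance: replacing an estimator by its conditional expectation leaves the mean unchanged and never increases the variance,

$$ \operatorname{Var}\!\big(\mathbb{E}[f(X,Y)\mid X]\big) \;=\; \operatorname{Var}\big(f(X,Y)\big) \;-\; \mathbb{E}\big[\operatorname{Var}(f(X,Y)\mid X)\big] \;\le\; \operatorname{Var}\big(f(X,Y)\big), $$

which in the particle filter comes at the cost of the per-trajectory information.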
no code implementations • NeurIPS 2021 • Tomas Geffner, Justin Domke
Given an unnormalized target distribution, we want to obtain approximate samples from it and a tight lower bound on its log normalization constant $\log Z$. Annealed Importance Sampling (AIS) with Hamiltonian MCMC is a powerful method for doing this.
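For context, a standard way AIS yields such a lower bound: with unnormalized annealing densities $\gamma_0,\dots,\gamma_K$ bridging a tractable start $\pi_0$ to the target $\pi_K$, and transitions that leave each intermediate distribution invariant, the importance weight

$$ w \;=\; \prod_{k=1}^{K} \frac{\gamma_k(x_{k-1})}{\gamma_{k-1}(x_{k-1})} \quad\text{satisfies}\quad \mathbb{E}[w] = \frac{Z_K}{Z_0}, \qquad\text{so}\qquad \mathbb{E}[\log w] + \log Z_0 \;\le\; \log Z_K $$

by Jensen's inequality, with the gap shrinking as more intermediate distributions (or better transitions, such as Hamiltonian ones) are used.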
no code implementations • Approximate Inference AABI Symposium 2021 • Tomas Geffner, Justin Domke
In this paper we empirically evaluate biased methods for alpha-divergence minimization.
no code implementations • 25 Feb 2021 • Justin Domke
It is important to estimate the errors of probabilistic inference algorithms.
no code implementations • 19 Oct 2020 • Tomas Geffner, Justin Domke
In this work we study unbiased methods for alpha-divergence minimization through the Signal-to-Noise Ratio (SNR) of the gradient estimator.
1 code implementation • NeurIPS 2020 • Tomas Geffner, Justin Domke
Flexible variational distributions improve variational inference but are harder to optimize.
no code implementations • NeurIPS 2020 • Abhinav Agrawal, Daniel Sheldon, Justin Domke
The combination of these algorithmic components significantly advances the state of the art in "out of the box" variational inference.
no code implementations • 7 Jan 2020 • Justin Domke
Maximum likelihood learning with exponential families leads to moment-matching of the sufficient statistics, a classic result.
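The classic result referenced here follows directly from the exponential-family form $p_\theta(x) = h(x)\exp(\theta^\top t(x) - A(\theta))$: the log-likelihood gradient over data $x_1,\dots,x_n$ is

$$ \nabla_\theta \frac{1}{n}\sum_{i=1}^{n} \log p_\theta(x_i) \;=\; \frac{1}{n}\sum_{i=1}^{n} t(x_i) \;-\; \mathbb{E}_{p_\theta}[t(X)], $$

so at the maximum-likelihood solution the model's expected sufficient statistics match their empirical averages.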
no code implementations • NeurIPS 2019 • My Phan, Yasin Abbasi-Yadkori, Justin Domke
We study the effects of approximate inference on the performance of Thompson sampling in $k$-armed bandit problems.
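For reference, a minimal Thompson sampling loop for a Bernoulli $k$-armed bandit with conjugate Beta posteriors (an illustrative baseline with exact posteriors; the paper concerns what happens when the posteriors are only approximate):

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_bernoulli(true_means, horizon=10_000):
    """Thompson sampling with exact Beta(1, 1) conjugate posteriors."""
    k = len(true_means)
    successes, failures = np.ones(k), np.ones(k)   # Beta(1, 1) priors
    total_reward = 0
    for _ in range(horizon):
        samples = rng.beta(successes, failures)    # one posterior draw per arm
        arm = int(np.argmax(samples))              # play the arm that looks best
        reward = rng.random() < true_means[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward

print(thompson_bernoulli(np.array([0.3, 0.5, 0.7])))
```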
no code implementations • 5 Nov 2019 • Tomas Geffner, Justin Domke
Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.
no code implementations • Approximate Inference AABI Symposium 2019 • Tomas Geffner, Justin Domke
Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.
no code implementations • NeurIPS 2019 • Justin Domke, Daniel Sheldon
Recent work in variational inference (VI) uses ideas from Monte Carlo estimation to tighten the lower bounds on the log-likelihood that are used as objectives.
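A widely used instance of this idea (stated here as background, not necessarily the exact estimator analyzed in the paper) is the importance-weighted bound: for $K$ i.i.d. draws $z_1,\dots,z_K \sim q$,

$$ \mathcal{L}_K \;=\; \mathbb{E}\!\left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p(x, z_k)}{q(z_k)}\right] \qquad\text{satisfies}\qquad \mathcal{L}_1 \;\le\; \mathcal{L}_K \;\le\; \log p(x), $$

so increasing $K$ tightens the standard evidence lower bound toward the log-likelihood.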
no code implementations • NeurIPS 2019 • Justin Domke
Recent variational inference methods use stochastic gradient estimators whose variance is not well understood.
no code implementations • ICML 2020 • Justin Domke
Black-box variational inference tries to approximate a complex target distribution through a gradient-based optimization of the parameters of a simpler distribution.
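Concretely, the optimization referred to here maximizes the evidence lower bound over the parameters $\phi$ of the simpler distribution $q_\phi$:

$$ \mathrm{ELBO}(\phi) \;=\; \mathbb{E}_{q_\phi(z)}\big[\log p(x, z) - \log q_\phi(z)\big] \;=\; \log p(x) \;-\; \mathrm{KL}\big(q_\phi(z)\,\|\,p(z \mid x)\big) \;\le\; \log p(x), $$

with the expectation estimated stochastically, so maximizing the bound drives $q_\phi$ toward the target posterior.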
no code implementations • NeurIPS 2018 • Tomas Geffner, Justin Domke
Variational inference is increasingly being addressed with stochastic optimization.
no code implementations • NeurIPS 2018 • Justin Domke, Daniel Sheldon
Recent work used importance sampling ideas for better variational bounds on likelihoods.
1 code implementation • ICLR 2019 • Ga Wu, Justin Domke, Scott Sanner
Variational Autoencoders (VAEs) are a popular generative model, but one in which conditional inference can be challenging.
no code implementations • ICML 2017 • Justin Domke
Two popular classes of methods for approximate inference are Markov chain Monte Carlo (MCMC) and variational inference.
no code implementations • NeurIPS 2015 • Hadi Mohasel Afshar, Justin Domke
Hamiltonian Monte Carlo (HMC) is a successful approach for sampling from continuous densities.
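A compact background sketch of a standard HMC step with leapfrog integration (illustrative only; the target density and tuning constants below are placeholders, and this is not the paper's extension of the method):

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(q, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20):
    """One standard HMC step with a leapfrog integrator (background sketch)."""
    p = rng.normal(size=q.shape)                      # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(q_new)   # half step for momentum
    for _ in range(n_leapfrog):
        q_new += step_size * p_new                    # full step for position
        p_new += step_size * grad_log_prob(q_new)     # full step for momentum
    p_new -= 0.5 * step_size * grad_log_prob(q_new)   # undo the extra half step
    # Metropolis correction on the joint (position, momentum) energy.
    log_accept = (log_prob(q_new) - 0.5 * p_new @ p_new) - (log_prob(q) - 0.5 * p @ p)
    return q_new if np.log(rng.random()) < log_accept else q

# Example: sample from a standard 2D Gaussian.
q = np.zeros(2)
for _ in range(1000):
    q = hmc_step(q, lambda x: -0.5 * x @ x, lambda x: -x)
```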
no code implementations • 1 Oct 2015 • Adrian Weller, Justin Domke
We examine the effect of clamping variables for approximate inference in undirected graphical models with pairwise relationships and discrete variables.
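The clamping operation studied here rests on an exact decomposition of the partition function: fixing a variable $x_i$ to each of its values and summing the resulting sub-problems,

$$ Z \;=\; \sum_{v} Z\big|_{x_i = v}, $$

so one can run approximate inference on each clamped model and combine the results; the question examined is how this affects the quality of common approximations.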
no code implementations • NeurIPS 2015 • Justin Domke
This paper proves that for any exponential family with bounded sufficient statistics (not just graphical models), when parameters are constrained to a fast-mixing set, gradient descent with gradients approximated by sampling will approximate the maximum likelihood solution inside the set with high probability.
no code implementations • NeurIPS 2014 • Xianghang Liu, Justin Domke
Markov chain Monte Carlo (MCMC) algorithms are simple and extremely powerful techniques to sample from almost arbitrary distributions.
3 code implementations • 10 Jul 2014 • Aaron J. Defazio, Tibério S. Caetano, Justin Domke
Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black box "batch" problem.
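A minimal sketch of one well-known incremental scheme from this literature (a SAGA-style update, named here only as an illustration and not necessarily this paper's algorithm), applied to a placeholder least-squares finite sum: each iteration touches a single component gradient but corrects it with a running table of stored gradients, which is how such methods beat black-box batch gradient descent on strongly convex sums.

```python
import numpy as np

rng = np.random.default_rng(0)

def saga_least_squares(A, b, step=None, epochs=50):
    """SAGA-style incremental gradient method on f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2."""
    n, d = A.shape
    if step is None:
        step = 1.0 / (3.0 * np.max(np.sum(A**2, axis=1)))  # crude Lipschitz-based step
    x = np.zeros(d)
    grads = (A @ x - b)[:, None] * A           # stored gradient table, one row per i
    grad_avg = grads.mean(axis=0)
    for _ in range(epochs * n):
        j = rng.integers(n)
        g_new = (A[j] @ x - b[j]) * A[j]       # fresh gradient of component j
        x -= step * (g_new - grads[j] + grad_avg)
        grad_avg += (g_new - grads[j]) / n     # keep the running average consistent
        grads[j] = g_new
    return x

# Illustrative data; compare with the exact least-squares solution.
A, b = rng.normal(size=(200, 5)), rng.normal(size=200)
print(np.linalg.norm(saga_least_squares(A, b) - np.linalg.lstsq(A, b, rcond=None)[0]))
```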
no code implementations • NeurIPS 2013 • Justin Domke
A successful approach to structured learning is to write the learning objective as a joint function of linear parameters and inference messages, and iterate between updates to each.
no code implementations • NeurIPS 2013 • Justin Domke, Xianghang Liu
Inference in general Ising models is difficult, due to high treewidth making tree-based algorithms intractable.
no code implementations • NeurIPS 2010 • Justin Domke
This paper proposes a simple and efficient finite difference method for implicit differentiation of marginal inference results in discrete graphical models.
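A hedged sketch of the finite-difference ingredient only (the inference routine below is a toy placeholder, not the paper's construction): when a learning loss depends on inferred marginals $\mu(\theta)$, a directional derivative of the marginal-inference map can be obtained from two extra inference calls at perturbed parameters, with no hand-derived backward pass.

```python
import numpy as np

def fd_directional_derivative(marginals, theta, v, eps=1e-4):
    """Central-difference approximation of the Jacobian of `marginals` applied to v.

    `marginals` stands in for any (approximate) marginal-inference routine
    mapping parameters to marginals.
    """
    return (marginals(theta + eps * v) - marginals(theta - eps * v)) / (2.0 * eps)

# Toy placeholder "inference": independent binary variables with logits theta.
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
theta = np.array([0.2, -1.0, 0.5])
v = np.array([1.0, 0.0, 0.0])
print(fd_directional_derivative(sigmoid, theta, v))   # ~ sigmoid'(0.2) in the first slot
```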