Search Results for author: Justin Domke

Found 37 papers, 6 papers with code

Sparse Covariance Modeling in High Dimensions with Gaussian Processes

no code implementations NeurIPS 2018 Rui Li, Kishan Kc, Feng Cui, Justin Domke, Anne Haake

This paper studies statistical relationships among components of high-dimensional observations varying across non-random covariates.

Gaussian Processes Vocal Bursts Intensity Prediction

Simulation-based stacking

no code implementations25 Oct 2023 Yuling Yao, Bruno Régaldo-Saint Blancard, Justin Domke

Simulation-based inference has been popular for amortized Bayesian computation.

Discriminative calibration: Check Bayesian computation from simulations and flexible classifier

1 code implementation NeurIPS 2023 Yuling Yao, Justin Domke

To check the accuracy of Bayesian computations, it is common to use rank-based simulation-based calibration (SBC).

Variational Inference
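The sketch below (not the paper's discriminative approach) illustrates the rank-based SBC procedure the abstract refers to, using a conjugate normal-normal model chosen as an assumption so the exact posterior is available; with exact inference the rank statistics are uniform.

import numpy as np

rng = np.random.default_rng(0)
n_sims, n_post = 2000, 99
ranks = np.empty(n_sims, dtype=int)
for i in range(n_sims):
    theta = rng.normal(0.0, 1.0)                      # draw parameter from the prior
    y = rng.normal(theta, 1.0)                        # simulate data: y | theta ~ N(theta, 1)
    post = rng.normal(y / 2.0, np.sqrt(0.5), n_post)  # "inference": exact posterior draws, N(y/2, 1/2)
    ranks[i] = np.sum(post < theta)                   # rank statistic in {0, ..., n_post}

# A flat histogram of ranks indicates calibrated Bayesian computation.
hist, _ = np.histogram(ranks, bins=10, range=(0, n_post + 1))
print(hist)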

Sample Average Approximation for Black-Box VI

no code implementations13 Apr 2023 Javier Burroni, Justin Domke, Daniel Sheldon

We present a novel approach for black-box VI that bypasses the difficulties of stochastic gradient ascent, including the task of selecting step-sizes.

Stochastic Optimization
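A minimal sketch of the sample average approximation idea, assuming an illustrative Gaussian target and Gaussian variational family: fixing the base randomness once turns the ELBO into a deterministic function of the variational parameters, which can be handed to a standard deterministic optimizer such as L-BFGS instead of stochastic gradient ascent with hand-tuned step sizes.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
eps = rng.normal(size=200)                 # fixed base samples (common random numbers)

def log_p(z):                              # illustrative unnormalized target: N(3, 2^2)
    return -0.5 * ((z - 3.0) / 2.0) ** 2

def neg_elbo(params):
    mu, s = params
    sigma = np.exp(s)
    z = mu + sigma * eps                   # reparameterized samples with the fixed eps
    log_q = -0.5 * eps ** 2 - np.log(sigma)
    return -np.mean(log_p(z) - log_q)      # deterministic once eps is fixed

res = minimize(neg_elbo, x0=np.array([0.0, 0.0]), method="L-BFGS-B")
print(res.x)                               # mu close to 3, exp(s) close to 2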

Joint control variate for faster black-box variational inference

1 code implementation13 Oct 2022 Xi Wang, Tomas Geffner, Justin Domke

Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance.

Stochastic Optimization Variational Inference
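Not the paper's joint control variate, but a hedged sketch of the generic variance-reduction idea it builds on: subtracting a baseline from a score-function gradient estimator leaves its expectation unchanged while reducing its variance (the target and variational family below are toy assumptions).

import numpy as np

rng = np.random.default_rng(0)
mu, n = 0.0, 100_000
z = rng.normal(mu, 1.0, n)
f = -0.5 * (z - 2.0) ** 2                  # integrand whose expected value we differentiate
score = z - mu                             # d/dmu log N(z; mu, 1)

# Plain score-function ("REINFORCE") gradient samples.
grad_plain = f * score

# Subtract a constant baseline estimated from an independent batch; since
# E[score] = 0, the expectation is unchanged but the variance drops.
z_b = rng.normal(mu, 1.0, n)
baseline = np.mean(-0.5 * (z_b - 2.0) ** 2)
grad_cv = (f - baseline) * score

print(grad_plain.mean(), grad_cv.mean())   # both estimate the same gradient (about 2.0)
print(grad_plain.var(), grad_cv.var())     # the baseline version has lower variance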

Langevin Diffusion Variational Inference

no code implementations16 Aug 2022 Tomas Geffner, Justin Domke

Using our formulation, we propose a new method that combines the strengths of previously existing algorithms: it uses underdamped Langevin transitions and powerful augmentations parameterized by a score network.

Variational Inference
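As a hedged, generic illustration of Langevin dynamics (overdamped and unadjusted, not the paper's underdamped, score-network-augmented transitions), a few lines suffice to sample a toy target:

import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(z):                         # toy target (assumption): N(1, 0.5^2)
    return -(z - 1.0) / 0.25

step, n_steps = 0.01, 5000
z = 0.0
samples = np.empty(n_steps)
for t in range(n_steps):
    # Unadjusted Langevin update: z <- z + (step/2) * grad log p(z) + sqrt(step) * noise
    z = z + 0.5 * step * grad_log_p(z) + np.sqrt(step) * rng.normal()
    samples[t] = z

print(samples[1000:].mean(), samples[1000:].std())   # roughly 1.0 and 0.5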

Amortized Variational Inference for Simple Hierarchical Models

no code implementations NeurIPS 2021 Abhinav Agrawal, Justin Domke

It is difficult to use subsampling with variational inference in hierarchical models since the number of local latent variables scales with the dataset.

Variational Inference

Variational Marginal Particle Filters

1 code implementation30 Sep 2021 Jinlin Lai, Justin Domke, Daniel Sheldon

We reveal that the marginal particle filter is obtained from sequential Monte Carlo by applying Rao-Blackwellization operations, which sacrifices the trajectory information for reduced variance and differentiability.

Variational Inference

MCMC Variational Inference via Uncorrected Hamiltonian Annealing

no code implementations NeurIPS 2021 Tomas Geffner, Justin Domke

Given an unnormalized target distribution, we want to obtain approximate samples from it and a tight lower bound on its log normalization constant, log Z. Annealed Importance Sampling (AIS) with Hamiltonian MCMC is a powerful method for doing this.

Variational Inference
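A hedged sketch of plain AIS with random-walk Metropolis transitions (not the paper's uncorrected Hamiltonian version): annealing from a tractable initial distribution to the target yields importance weights whose average estimates Z, and whose average log gives a stochastic lower bound on log Z. The one-dimensional target below is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

def log_f0(z):                             # normalized initial distribution: N(0, 1)
    return -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)

def log_fT(z):                             # unnormalized target: exp(-0.5 ((z-3)/0.5)^2), true log Z ~ 0.226
    return -0.5 * ((z - 3.0) / 0.5) ** 2

n_chains, n_temps = 500, 200
betas = np.linspace(0.0, 1.0, n_temps + 1)
z = rng.normal(size=n_chains)              # initial samples from f0
log_w = np.zeros(n_chains)

for t in range(1, n_temps + 1):
    beta = betas[t]
    # Accumulate the AIS incremental weight for the new temperature.
    log_w += (beta - betas[t - 1]) * (log_fT(z) - log_f0(z))
    # One random-walk Metropolis step targeting f_beta = f0^(1-beta) * fT^beta.
    lp_z = (1 - beta) * log_f0(z) + beta * log_fT(z)
    prop = z + 0.5 * rng.normal(size=n_chains)
    lp_prop = (1 - beta) * log_f0(prop) + beta * log_fT(prop)
    accept = np.log(rng.uniform(size=n_chains)) < lp_prop - lp_z
    z = np.where(accept, prop, z)

print(np.mean(log_w))                      # stochastic lower bound on log Z
print(np.log(np.mean(np.exp(log_w))))      # consistent estimate of log Z (~0.226)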

On the Difficulty of Unbiased Alpha Divergence Minimization

no code implementations19 Oct 2020 Tomas Geffner, Justin Domke

In this work we study unbiased methods for alpha-divergence minimization through the Signal-to-Noise Ratio (SNR) of the gradient estimator.
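A hedged sketch of the diagnostic in question, assuming a toy score-function estimator: the signal-to-noise ratio |E[g]| / std(g) of a per-sample gradient estimate can be assessed empirically by drawing many gradient samples.

import numpy as np

rng = np.random.default_rng(0)

def grad_samples(mu, n):
    # Toy score-function gradient samples of d/dmu E_{N(mu,1)}[ -(z-2)^2 / 2 ].
    z = rng.normal(mu, 1.0, n)
    return -0.5 * (z - 2.0) ** 2 * (z - mu)

g = grad_samples(0.0, 100_000)
snr = np.abs(g.mean()) / g.std()           # per-sample signal-to-noise ratio
print(g.mean(), g.std(), snr)              # true gradient is 2.0; the per-sample SNR is well below 1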

Moment-Matching Conditions for Exponential Families with Conditioning or Hidden Data

no code implementations7 Jan 2020 Justin Domke

Maximum likelihood learning with exponential families leads to moment-matching of the sufficient statistics, a classic result.
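A small numerical illustration of that classic result, assuming a fully observed Gaussian family: the maximum-likelihood fit reproduces the empirical expectations of the sufficient statistics T(x) = (x, x^2) exactly.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.5, 2.0, size=50_000)      # observed data

# Gaussian exponential family with sufficient statistics T(x) = (x, x^2).
mu_hat = x.mean()
sigma2_hat = x.var()                       # MLE uses the 1/n variance

model_moments = (mu_hat, sigma2_hat + mu_hat ** 2)   # E[x], E[x^2] under the fitted model
empirical_moments = (x.mean(), np.mean(x ** 2))
print(model_moments)
print(empirical_moments)                   # identical up to floating point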

Thompson Sampling and Approximate Inference

no code implementations NeurIPS 2019 My Phan, Yasin Abbasi-Yadkori, Justin Domke

We study the effects of approximate inference on the performance of Thompson sampling in the $k$-armed bandit problems.

Decision Making Thompson Sampling
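For context, a hedged sketch of Thompson sampling with exact conjugate inference on a Bernoulli k-armed bandit (the arm probabilities below are illustrative assumptions); the paper studies what changes when these posteriors are replaced by approximations.

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])     # assumed arm reward probabilities
k, horizon = len(true_means), 5000
alpha, beta = np.ones(k), np.ones(k)       # Beta(1, 1) prior for each arm

rewards = 0.0
for t in range(horizon):
    theta = rng.beta(alpha, beta)          # sample one mean per arm from its posterior
    arm = int(np.argmax(theta))            # play the arm whose sample is largest
    r = float(rng.uniform() < true_means[arm])
    alpha[arm] += r                        # conjugate Beta-Bernoulli posterior update
    beta[arm] += 1.0 - r
    rewards += r

print(rewards / horizon)                   # approaches the best arm's mean, 0.7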

A Rule for Gradient Estimator Selection, with an Application to Variational Inference

no code implementations5 Nov 2019 Tomas Geffner, Justin Domke

Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.

Variational Inference

Automatically Trading off Time and Variance when Selecting Gradient Estimators

no code implementations AABI Symposium 2019 Tomas Geffner, Justin Domke

Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.

Divide and Couple: Using Monte Carlo Variational Objectives for Posterior Approximation

no code implementations NeurIPS 2019 Justin Domke, Daniel Sheldon

Recent work in variational inference (VI) uses ideas from Monte Carlo estimation to tighten the lower bounds on the log-likelihood that are used as objectives.

Variational Inference
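A hedged sketch of the underlying Monte Carlo idea, assuming a toy Gaussian target and proposal: the importance-weighted objective log (1/K) sum_k p(z_k)/q(z_k) is a lower bound on log Z that equals the standard ELBO at K = 1 and tightens as K grows.

import numpy as np

rng = np.random.default_rng(0)

def log_p(z):                              # unnormalized target: exp(-0.5 (z-1)^2), log Z = 0.5*log(2*pi)
    return -0.5 * (z - 1.0) ** 2

def log_q(z):                              # proposal / variational distribution: N(0, 1)
    return -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)

def iw_bound(K, n_rep=5000):
    z = rng.normal(0.0, 1.0, size=(n_rep, K))
    log_w = log_p(z) - log_q(z)
    # Log-mean-exp over the K samples, averaged over repetitions.
    m = log_w.max(axis=1, keepdims=True)
    return np.mean(m.squeeze() + np.log(np.mean(np.exp(log_w - m), axis=1)))

for K in (1, 10, 100):
    print(K, iw_bound(K))                  # increases with K toward log Z ~ 0.919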

Provable Gradient Variance Guarantees for Black-Box Variational Inference

no code implementations NeurIPS 2019 Justin Domke

Recent variational inference methods use stochastic gradient estimators whose variance is not well understood.

Variational Inference

Provable Smoothness Guarantees for Black-Box Variational Inference

no code implementations ICML 2020 Justin Domke

Black-box variational inference tries to approximate a complex target distribution through gradient-based optimization of the parameters of a simpler distribution.

Variational Inference

Importance Weighting and Variational Inference

no code implementations NeurIPS 2018 Justin Domke, Daniel Sheldon

Recent work used importance sampling ideas for better variational bounds on likelihoods.

Variational Inference

Conditional Inference in Pre-trained Variational Autoencoders via Cross-coding

1 code implementation ICLR 2019 Ga Wu, Justin Domke, Scott Sanner

Variational Autoencoders (VAEs) are a popular generative model, but one in which conditional inference can be challenging.

Reflection, Refraction, and Hamiltonian Monte Carlo

no code implementations NeurIPS 2015 Hadi Mohasel Afshar, Justin Domke

Hamiltonian Monte Carlo (HMC) is a successful approach for sampling from continuous densities.
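A hedged sketch of standard HMC with leapfrog integration on a smooth toy target; the paper's reflection and refraction moves for discontinuous densities are not shown.

import numpy as np

rng = np.random.default_rng(0)

def log_p(z):                              # toy target (assumption): N(2, 1)
    return -0.5 * (z - 2.0) ** 2

def grad_log_p(z):
    return -(z - 2.0)

def hmc_step(z, step=0.2, n_leap=20):
    r = rng.normal()                       # resample the momentum
    z_new, r_new = z, r
    # Leapfrog integration of the Hamiltonian dynamics.
    r_new += 0.5 * step * grad_log_p(z_new)
    for _ in range(n_leap - 1):
        z_new += step * r_new
        r_new += step * grad_log_p(z_new)
    z_new += step * r_new
    r_new += 0.5 * step * grad_log_p(z_new)
    # Metropolis accept/reject based on the change in the Hamiltonian.
    log_accept = (log_p(z_new) - 0.5 * r_new ** 2) - (log_p(z) - 0.5 * r ** 2)
    return z_new if np.log(rng.uniform()) < log_accept else z

z, samples = 0.0, []
for _ in range(5000):
    z = hmc_step(z)
    samples.append(z)

print(np.mean(samples[500:]), np.std(samples[500:]))   # roughly 2 and 1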

Clamping Improves TRW and Mean Field Approximations

no code implementations1 Oct 2015 Adrian Weller, Justin Domke

We examine the effect of clamping variables for approximate inference in undirected graphical models with pairwise relationships and discrete variables.

Maximum Likelihood Learning With Arbitrary Treewidth via Fast-Mixing Parameter Sets

no code implementations NeurIPS 2015 Justin Domke

This paper proves that for any exponential family with bounded sufficient statistics (not just graphical models), when parameters are constrained to a fast-mixing set, gradient descent with gradients approximated by sampling will approximate the maximum-likelihood solution inside the set with high probability.

Projecting Markov Random Field Parameters for Fast Mixing

no code implementations NeurIPS 2014 Xianghang Liu, Justin Domke

Markov chain Monte Carlo (MCMC) algorithms are simple and extremely powerful techniques to sample from almost arbitrary distributions.
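A hedged example of the kind of simple, powerful MCMC meant here: single-site Gibbs sampling for a small two-dimensional Ising model (grid size and coupling below are illustrative assumptions; the paper concerns projecting parameters so that such chains mix quickly).

import numpy as np

rng = np.random.default_rng(0)
n, coupling, n_sweeps = 16, 0.3, 200
spins = rng.choice([-1, 1], size=(n, n))

for _ in range(n_sweeps):
    for i in range(n):
        for j in range(n):
            # Sum of the four neighbours (with wrap-around boundary).
            nb = (spins[(i - 1) % n, j] + spins[(i + 1) % n, j] +
                  spins[i, (j - 1) % n] + spins[i, (j + 1) % n])
            # Conditional p(s_ij = +1 | neighbours) for the Ising model.
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * coupling * nb))
            spins[i, j] = 1 if rng.uniform() < p_plus else -1

print(spins.mean())                        # average magnetization after sampling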

Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems

3 code implementations10 Jul 2014 Aaron J. Defazio, Tibério S. Caetano, Justin Domke

Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black box "batch" problem.
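Not Finito itself, but a hedged sketch of the incremental-gradient family it belongs to: a SAG-style method that stores one gradient per component of the finite sum and steps along their running average (the least-squares problem below is an illustrative assumption).

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_i(w, i):                          # gradient of the i-th term 0.5 * (a_i . w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(d)
memory = np.zeros((n, d))                  # stored gradient for each component
avg = memory.mean(axis=0)
step = 0.02

for t in range(50 * n):
    i = rng.integers(n)
    g_new = grad_i(w, i)
    avg += (g_new - memory[i]) / n         # keep the running average up to date
    memory[i] = g_new
    w -= step * avg                        # step along the averaged stored gradients

print(np.linalg.norm(A @ w - b) ** 2 / n)  # close to the least-squares optimum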

Structured Learning via Logistic Regression

no code implementations NeurIPS 2013 Justin Domke

A successful approach to structured learning is to write the learning objective as a joint function of linear parameters and inference messages, and iterate between updates to each.

regression

Projecting Ising Model Parameters for Fast Mixing

no code implementations NeurIPS 2013 Justin Domke, Xianghang Liu

Inference in general Ising models is difficult, due to high treewidth making tree-based algorithms intractable.

Implicit Differentiation by Perturbation

no code implementations NeurIPS 2010 Justin Domke

This paper proposes a simple and efficient finite difference method for implicit differentiation of marginal inference results in discrete graphical models.
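A hedged miniature of the perturbation idea, using softmax as an illustrative stand-in for marginal inference: because marginals are the gradient of a log-partition function, their Jacobian is symmetric, so the loss gradient with respect to the parameters can be obtained by re-running inference at a slightly perturbed parameter and taking a finite difference.

import numpy as np

def marginals(theta):                      # stand-in for marginal inference: softmax = grad of log-sum-exp
    e = np.exp(theta - theta.max())
    return e / e.sum()

theta = np.array([0.2, -1.0, 0.5, 1.3])
target = np.array([0.1, 0.2, 0.3, 0.4])

mu = marginals(theta)
dL_dmu = mu - target                       # gradient of L(mu) = 0.5 * ||mu - target||^2

# Perturbation trick: the Jacobian of marginals() is symmetric (it is the Hessian
# of a log-partition function), so the needed vector-Jacobian product equals a
# Jacobian-vector product, which a finite difference approximates.
r = 1e-6
grad_theta = (marginals(theta + r * dL_dmu) - mu) / r

# Check against the exact Jacobian: diag(mu) - mu mu^T.
J = np.diag(mu) - np.outer(mu, mu)
print(grad_theta)
print(J @ dL_dmu)                          # matches to several decimals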
