Search Results for author: Tomas Geffner

Found 14 papers, 5 papers with code

Joint control variate for faster black-box variational inference

1 code implementation 13 Oct 2022 Xi Wang, Tomas Geffner, Justin Domke

Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance.
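The snippet refers to taming gradient variance with control variates. As an illustration of the general idea only (a textbook single control variate, not the paper's joint control variate), subtract a zero-mean quantity correlated with the estimator, with the coefficient chosen to minimize variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (illustrative, not the paper's setup): estimate
# d/d_mu E_{x~N(mu,1)}[x^2] with the score-function estimator.
# The true gradient is 2*mu.
mu = 1.0
x = rng.normal(mu, 1.0, size=200_000)
score = x - mu              # d/d_mu log N(x; mu, 1)
naive = score * x**2        # unbiased but high-variance estimator

# Control variate: `score` has expectation zero, so subtracting c*score
# keeps the estimator unbiased for any constant c. The variance-minimizing
# choice is c* = Cov(naive, score) / Var(score).
c = np.cov(naive, score)[0, 1] / np.var(score)
reduced = naive - c * score

print(naive.mean(), naive.var())      # mean ~2.0, large variance
print(reduced.mean(), reduced.var())  # same mean, smaller variance
```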

Stochastic Optimization · Variational Inference

Compositional Score Modeling for Simulation-based Inference

1 code implementation 28 Sep 2022 Tomas Geffner, George Papamakarios, Andriy Mnih

Neural Posterior Estimation methods for simulation-based inference can be ill-suited for dealing with posterior distributions obtained by conditioning on multiple observations, as they tend to require a large number of simulator calls to learn accurate approximations.

Variational Inference

Langevin Diffusion Variational Inference

no code implementations 16 Aug 2022 Tomas Geffner, Justin Domke

In fact, using our formulation we propose a new method that combines the strengths of existing algorithms: it uses underdamped Langevin transitions and powerful augmentations parameterized by a score network.
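An underdamped Langevin transition of the kind the snippet mentions can be sketched in a few lines. This is a plain Euler discretization targeting a standard normal, with none of the paper's score-network augmentations; the target, step size, and friction are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    return -x  # score of a standard normal target (illustrative)

# Underdamped Langevin dynamics: the velocity v gets a gradient pull,
# friction, and injected noise; the position x follows the velocity.
x = 5.0 * np.ones(5000)   # 5000 independent chains, started far from the mode
v = np.zeros(5000)
eps, gamma = 0.1, 1.0     # step size and friction (illustrative values)

for _ in range(2000):
    v = (v + eps * grad_log_p(x) - eps * gamma * v
         + np.sqrt(2.0 * gamma * eps) * rng.normal(size=x.shape))
    x = x + eps * v

# After burn-in, x is approximately distributed as the N(0, 1) target
# (up to discretization bias of order eps).
print(x.mean(), x.var())
```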

Variational Inference

Deep End-to-end Causal Inference

1 code implementation 4 Feb 2022 Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang

Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment and policy making.

Causal Discovery · Causal Inference +1

FCause: Flow-based Causal Discovery

no code implementations 29 Sep 2021 Tomas Geffner, Emre Kiciman, Angus Lamb, Martin Kukla, Miltiadis Allamanis, Cheng Zhang

Current causal discovery methods either fail to scale, model only limited forms of functional relationships, or cannot handle missing values.

Causal Discovery

MCMC Variational Inference via Uncorrected Hamiltonian Annealing

no code implementations NeurIPS 2021 Tomas Geffner, Justin Domke

Given an unnormalized target distribution, we want to obtain approximate samples from it and a tight lower bound on its log normalization constant log Z. Annealed Importance Sampling (AIS) with Hamiltonian MCMC is a powerful method for doing this.
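The AIS idea from the snippet can be sketched compactly. Below is vanilla AIS with random-walk Metropolis transitions (not the paper's uncorrected Hamiltonian annealing); the bimodal target, initial distribution, and schedule are all illustrative, and the target's normalizer Z = 2√(2π) is known so the bound can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):   # unnormalized bimodal target; true Z = 2*sqrt(2*pi)
    return np.logaddexp(-0.5 * (x + 2)**2, -0.5 * (x - 2)**2)

def log_q0(x):  # normalized initial distribution N(0, 3^2)
    return -0.5 * (x / 3)**2 - np.log(3 * np.sqrt(2 * np.pi))

n, betas = 2000, np.linspace(0.0, 1.0, 201)
x = rng.normal(0.0, 3.0, size=n)
logw = np.zeros(n)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Accumulate the AIS incremental weight, then move with one
    # random-walk Metropolis step targeting the intermediate density.
    logw += (b - b_prev) * (log_p(x) - log_q0(x))
    prop = x + rng.normal(0.0, 0.8, size=n)
    log_acc = ((1 - b) * (log_q0(prop) - log_q0(x))
               + b * (log_p(prop) - log_p(x)))
    accept = np.log(rng.uniform(size=n)) < log_acc
    x = np.where(accept, prop, x)

lower_bound = logw.mean()                          # stochastic lower bound on log Z (Jensen)
log_z_hat = np.logaddexp.reduce(logw) - np.log(n)  # importance-weighted estimate of log Z
print(lower_bound, log_z_hat, np.log(2 * np.sqrt(2 * np.pi)))
```

By Jensen's inequality the mean of the log-weights is always below the importance-weighted estimate, and in expectation it lower-bounds log Z; more (or better-mixing) transitions tighten the bound.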

Variational Inference

On the Difficulty of Unbiased Alpha Divergence Minimization

no code implementations 19 Oct 2020 Tomas Geffner, Justin Domke

In this work we study unbiased methods for alpha-divergence minimization through the Signal-to-Noise Ratio (SNR) of the gradient estimator.
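The SNR diagnostic in the snippet is simple to compute: for a gradient estimator g, SNR = |E[g]| / std(g), estimated from repeated draws. A toy sketch (hypothetical score-function estimator, not the paper's alpha-divergence setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy estimator (hypothetical): n-sample score-function estimate of
# d/d_mu E_{x~N(mu,1)}[x^2], whose true value is 2*mu.
mu = 1.0
def grad_estimate(n):
    x = rng.normal(mu, 1.0, size=n)
    return np.mean((x - mu) * x**2)

# Estimate SNR = |E[g]| / std(g) from many independent draws of g.
g = np.array([grad_estimate(10) for _ in range(5000)])
snr = np.abs(g.mean()) / g.std()
print(snr)  # a low SNR means the signal is drowned out by estimator noise
```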

A Rule for Gradient Estimator Selection, with an Application to Variational Inference

no code implementations 5 Nov 2019 Tomas Geffner, Justin Domke

Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.
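A hedged sketch of the generic selection idea (estimator pool, noise scales, and costs are all hypothetical; the paper's actual rule is more refined): among unbiased estimators, pick the one minimizing variance × cost per sample, since averaging a cheap estimator k times divides its variance by k at k times the cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of unbiased estimators of the same quantity (1.0),
# each with a different noise level and per-sample cost.
def make_estimator(scale, cost):
    return {"sample": lambda n: 1.0 + rng.normal(0.0, scale, size=n),
            "cost": cost}

pool = [make_estimator(1.0, 1.0),   # cheap but noisy
        make_estimator(0.1, 5.0),   # expensive but precise
        make_estimator(0.5, 2.0)]   # in between

def score(est, pilot=1000):
    # Pilot run to estimate the variance, then trade it off against cost:
    # variance * cost = variance achievable per unit of compute.
    var = np.var(est["sample"](pilot))
    return var * est["cost"]

best = min(range(len(pool)), key=lambda i: score(pool[i]))
print(best)  # index of the estimator with the best variance-cost trade-off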

Variational Inference

Automatically Trading off Time and Variance when Selecting Gradient Estimators

no code implementations AABI Symposium 2019 Tomas Geffner, Justin Domke

Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.

Compact Policies for Fully-Observable Non-Deterministic Planning as SAT

1 code implementation 25 Jun 2018 Tomas Geffner, Hector Geffner

Fully observable non-deterministic (FOND) planning is becoming increasingly important as an approach for computing proper policies in probabilistic planning, extended temporal plans in LTL planning, and general plans in generalized planning.
