Search Results for author: Rob Brekelmans

Found 19 papers, 11 papers with code

Disentangled Representations via Synergy Minimization

1 code implementation · 10 Oct 2017 · Greg Ver Steeg, Rob Brekelmans, Hrayr Harutyunyan, Aram Galstyan

Scientists often seek simplified representations of complex systems to facilitate prediction and understanding.

Auto-Encoding Total Correlation Explanation

no code implementations · 16 Feb 2018 · Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, Aram Galstyan

Advances in unsupervised learning enable reconstruction and generation of samples from complex distributions, but this success is marred by the inscrutability of the representations learned.

Disentanglement

Invariant Representations without Adversarial Training

1 code implementation · NeurIPS 2018 · Daniel Moyer, Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, Aram Galstyan

Representations of data that are invariant to changes in specified factors are useful for a wide range of problems: removing potential biases in prediction problems, controlling the effects of covariates, and disentangling meaningful factors of variation.

Representation Learning

Exact Rate-Distortion in Autoencoders via Echo Noise

1 code implementation · NeurIPS 2019 · Rob Brekelmans, Daniel Moyer, Aram Galstyan, Greg Ver Steeg

The noise is constructed in a data-driven fashion that does not require restrictive distributional assumptions.

Representation Learning

All in the Exponential Family: Bregman Duality in Thermodynamic Variational Inference

1 code implementation · 1 Jul 2020 · Rob Brekelmans, Vaden Masrani, Frank Wood, Greg Ver Steeg, Aram Galstyan

We propose to choose intermediate distributions using equal spacing in the moment parameters of our exponential family, which matches grid search performance and allows the schedule to adaptively update over the course of training.

Variational Inference
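
A sketch of the schedule construction (notation follows the TVO setting, with $w = p(x,z)/q(z \mid x)$): the geometric path $\pi_\beta(z) \propto q(z \mid x)^{1-\beta}\, p(x,z)^{\beta}$ forms a one-dimensional exponential family whose moment parameter is the TVO integrand itself,

$$\eta(\beta) = \mathbb{E}_{\pi_\beta}\left[\log w\right],$$

so the proposed schedule chooses $\{\beta_k\}$ such that the values $\eta(\beta_k)$ are equally spaced, rather than spacing the $\beta_k$ uniformly or tuning them by grid search.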

Gaussian Process Bandit Optimization of the Thermodynamic Variational Objective

1 code implementation · NeurIPS 2020 · Vu Nguyen, Vaden Masrani, Rob Brekelmans, Michael A. Osborne, Frank Wood

Achieving the full promise of the Thermodynamic Variational Objective (TVO), a recently proposed variational lower bound on the log evidence involving a one-dimensional Riemann integral approximation, requires choosing a "schedule" of sorted discretization points.
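
For context, the TVO replaces the thermodynamic integration identity $\log p(x) = \int_0^1 \mathbb{E}_{\pi_\beta}[\log w]\, d\beta$, with $w = p(x,z)/q(z \mid x)$, by a left Riemann sum over a schedule $0 = \beta_0 < \beta_1 < \dots < \beta_K = 1$:

$$\mathrm{TVO} = \sum_{k=1}^{K} (\beta_k - \beta_{k-1})\, \mathbb{E}_{\pi_{\beta_{k-1}}}\left[\log w\right] \;\le\; \log p(x).$$

Because the integrand is nondecreasing in $\beta$, the placement of the discretization points directly controls the tightness of the bound, which is the quantity the Gaussian process bandit is used to optimize.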

Annealed Importance Sampling with q-Paths

2 code implementations · NeurIPS Workshop DL-IG 2020 · Rob Brekelmans, Vaden Masrani, Thang Bui, Frank Wood, Aram Galstyan, Greg Ver Steeg, Frank Nielsen

Annealed importance sampling (AIS) is the gold standard for estimating partition functions or marginal likelihoods, corresponding to importance sampling over a path of distributions between a tractable base and an unnormalized target.
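
As a reminder of the mechanism (standard AIS, not specific to this paper): given unnormalized densities $\tilde\pi_0, \dots, \tilde\pi_T$ along the path, with $z_0 \sim \pi_0$ and each $z_t$ drawn from a transition kernel leaving $\pi_t$ invariant, the weight

$$\hat{w} = \prod_{t=1}^{T} \frac{\tilde\pi_t(z_{t-1})}{\tilde\pi_{t-1}(z_{t-1})}$$

is an unbiased estimator of the ratio of normalizing constants, $\mathbb{E}[\hat{w}] = Z_T / Z_0$.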

Likelihood Ratio Exponential Families

no code implementations · NeurIPS Workshop DL-IG 2020 · Rob Brekelmans, Frank Nielsen, Alireza Makhzani, Aram Galstyan, Greg Ver Steeg

The exponential family is well known in machine learning and statistical physics as the maximum entropy distribution subject to a set of observed constraints, while the geometric mixture path is common in MCMC methods such as annealed importance sampling.

LEMMA
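
The connection in the title can be stated compactly (a sketch): the geometric mixture path is itself an exponential family whose sufficient statistic is the log likelihood ratio,

$$\pi_\beta(z) \;\propto\; \pi_0(z)\, \exp\left(\beta \log \frac{\pi_1(z)}{\pi_0(z)}\right),$$

so convex duality and Bregman divergence tools from information geometry apply directly to annealing paths.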

Stochastic Approximation of Gaussian Free Energy for Risk-Sensitive Reinforcement Learning

no code implementations · NeurIPS 2021 · Grégoire Delétang, Jordi Grau-Moya, Markus Kunesch, Tim Genewein, Rob Brekelmans, Shane Legg, Pedro A. Ortega

Since the Gaussian free energy is known to be a certainty-equivalent sensitive to the mean and the variance, the learning rule has applications in risk-sensitive decision-making.

Decision Making · Reinforcement Learning +1
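
Concretely (a standard identity rather than a result of this paper): for a Gaussian return $R \sim \mathcal{N}(\mu, \sigma^2)$, the free energy

$$F_\beta[R] = \frac{1}{\beta} \log \mathbb{E}\left[e^{\beta R}\right] = \mu + \frac{\beta}{2}\,\sigma^2$$

is exactly the mean plus a variance term, with $\beta > 0$ giving risk-seeking and $\beta < 0$ giving risk-averse behavior.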

q-Paths: Generalizing the Geometric Annealing Path using Power Means

1 code implementation · 1 Jul 2021 · Vaden Masrani, Rob Brekelmans, Thang Bui, Frank Nielsen, Aram Galstyan, Greg Ver Steeg, Frank Wood

Many common machine learning methods involve the geometric annealing path, a sequence of intermediate densities between two distributions of interest constructed using the geometric average.

Bayesian Inference
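
A sketch of the generalization (following the q-path construction): the geometric average is replaced by a power mean with exponent $1-q$,

$$\tilde\pi_\beta^{(q)}(z) = \Big[(1-\beta)\, \pi_0(z)^{1-q} + \beta\, \pi_1(z)^{1-q}\Big]^{\frac{1}{1-q}},$$

which recovers the arithmetic mixture at $q = 0$ and the usual geometric path in the limit $q \to 1$.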

Model-Free Risk-Sensitive Reinforcement Learning

no code implementations · 4 Nov 2021 · Grégoire Delétang, Jordi Grau-Moya, Markus Kunesch, Tim Genewein, Rob Brekelmans, Shane Legg, Pedro A. Ortega

Since the Gaussian free energy is known to be a certainty-equivalent sensitive to the mean and the variance, the learning rule has applications in risk-sensitive decision-making.

Decision Making · Reinforcement Learning +1

Your Policy Regularizer is Secretly an Adversary

no code implementations · 23 Mar 2022 · Rob Brekelmans, Tim Genewein, Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Shane Legg, Pedro Ortega

Policy regularization methods such as maximum entropy regularization are widely used in reinforcement learning to improve the robustness of a learned policy.
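
To make the claim concrete (a hedged sketch of the duality, not the paper's full statement): a KL-regularized objective such as

$$\max_{\pi}\; \mathbb{E}_{a \sim \pi}\left[r(a)\right] - \frac{1}{\beta}\, \mathrm{KL}\left(\pi \,\|\, \pi_0\right)$$

can be rewritten, via convex duality, as a game in which an adversary perturbs the reward $r$ within a feasible set whose size is controlled by the regularization strength; this is the sense in which the regularizer acts as an adversary and confers robustness.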

Variational Representations of Annealing Paths: Bregman Information under Monotonic Embedding

no code implementations · 15 Sep 2022 · Rob Brekelmans, Frank Nielsen

Markov Chain Monte Carlo methods for sampling from complex distributions and estimating normalization constants often simulate samples from a sequence of intermediate distributions along an annealing path, which bridges between a tractable initial distribution and a target density of interest.

Action Matching: Learning Stochastic Dynamics from Samples

1 code implementation · 13 Oct 2022 · Kirill Neklyudov, Rob Brekelmans, Daniel Severo, Alireza Makhzani

Learning the continuous dynamics of a system from snapshots of its temporal marginals is a problem which appears throughout natural sciences and machine learning, including in quantum systems, single-cell biological data, and generative modeling.

Colorization · Super-Resolution
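
A sketch of the setup (an informal summary): the time-dependent marginals $q_t$ are modeled as evolving under a continuity equation, and the method learns a scalar "action" $s_t$ whose gradient serves as the velocity field,

$$\partial_t q_t(x) + \nabla \cdot \big(q_t(x)\, \nabla s_t(x)\big) = 0,$$

so that individual samples can be transported through time by integrating $\dot{x} = \nabla s_t(x)$.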

Information-Theoretic Diffusion

1 code implementation · 7 Feb 2023 · Xianghao Kong, Rob Brekelmans, Greg Ver Steeg

Denoising diffusion models have spurred significant gains in density modeling and image generation, precipitating an industrial revolution in text-guided AI art generation.

Denoising · Image Generation +1

Improving Mutual Information Estimation with Annealed and Energy-Based Bounds

1 code implementation · ICLR 2022 · Rob Brekelmans, Sicong Huang, Marzyeh Ghassemi, Greg Ver Steeg, Roger Grosse, Alireza Makhzani

Since accurate estimation of MI without density information requires a sample size exponential in the true MI, we assume either a single marginal or the full joint density information is known.

Mutual Information Estimation
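
For intuition on the caveat (an informal statement of a known limitation result): with

$$I(X;Y) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x,y)}{p(x)\, p(y)}\right],$$

any distribution-free, high-confidence lower bound computed from $N$ samples can certify at most roughly $\log N$ nats, so reliably estimating a mutual information of $I$ nats requires on the order of $e^{I}$ samples; assuming access to a marginal or the joint density sidesteps this barrier.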

A Computational Framework for Solving Wasserstein Lagrangian Flows

1 code implementation · 16 Oct 2023 · Kirill Neklyudov, Rob Brekelmans, Alexander Tong, Lazar Atanackovic, Qiang Liu, Alireza Makhzani

The dynamical formulation of the optimal transport can be extended through various choices of the underlying geometry ($\textit{kinetic energy}$), and the regularization of density paths ($\textit{potential energy}$).
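
As a reference point, the classical Benamou-Brenier formulation that this framework generalizes expresses the squared Wasserstein-2 distance as a least-action problem with purely kinetic energy,

$$W_2^2(\mu_0, \mu_1) = \inf_{(\rho_t, v_t)} \int_0^1 \!\! \int \|v_t(x)\|^2\, \rho_t(x)\, dx\, dt \quad \text{s.t.} \quad \partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0, \;\; \rho_0 = \mu_0, \;\; \rho_1 = \mu_1,$$

and the paper's framework varies the kinetic term and adds potential-energy terms on the density path.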

All in the (Exponential) Family: Information Geometry and Thermodynamic Variational Inference

no code implementations · ICML 2020 · Rob Brekelmans, Vaden Masrani, Frank Wood, Greg Ver Steeg, Aram Galstyan

While the Evidence Lower Bound (ELBO) has become a ubiquitous objective for variational inference, the recently proposed Thermodynamic Variational Objective (TVO) leverages thermodynamic integration to provide a tighter and more general family of bounds.

Scheduling · Variational Inference
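
The endpoints of the underlying path make the relationship explicit (with $w = p(x,z)/q(z \mid x)$): the TVO integrand evaluated at $\beta = 0$ is the ELBO, while at $\beta = 1$ it gives an upper bound,

$$\mathrm{ELBO} = \mathbb{E}_{q(z \mid x)}\left[\log w\right] \;\le\; \log p(x) \;\le\; \mathbb{E}_{p(z \mid x)}\left[\log w\right],$$

so discretizing the path yields a family of bounds that sandwich the log evidence.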
