no code implementations • 11 Apr 2022 • Nan Rosemary Ke, Silvia Chiappa, Jane Wang, Anirudh Goyal, Jorg Bornschein, Melanie Rey, Theophane Weber, Matthew Botvinick, Michael Mozer, Danilo Jimenez Rezende
The fundamental challenge in causal induction is to infer the underlying graph structure given observational and/or interventional data.
no code implementations • 20 Jan 2021 • Michael S. Albergo, Denis Boyda, Daniel C. Hackett, Gurtej Kanwar, Kyle Cranmer, Sébastien Racanière, Danilo Jimenez Rezende, Phiala E. Shanahan
This notebook tutorial demonstrates a method for sampling Boltzmann distributions of lattice field theories using a class of machine learning models known as normalizing flows.
no code implementations • 12 Aug 2020 • Denis Boyda, Gurtej Kanwar, Sébastien Racanière, Danilo Jimenez Rezende, Michael S. Albergo, Kyle Cranmer, Daniel C. Hackett, Phiala E. Shanahan
We develop a flow-based sampling algorithm for $SU(N)$ lattice gauge theories that is gauge-invariant by construction.
no code implementations • 30 Mar 2020 • Karen Ullrich, Fabio Viola, Danilo Jimenez Rezende
Reliably transmitting messages despite information loss due to a noisy channel is a core problem of information theory.
no code implementations • 13 Mar 2020 • Gurtej Kanwar, Michael S. Albergo, Denis Boyda, Kyle Cranmer, Daniel C. Hackett, Sébastien Racanière, Danilo Jimenez Rezende, Phiala E. Shanahan
We define a class of machine-learned flow-based sampling algorithms for lattice gauge theories that are gauge-invariant by construction.
no code implementations • 12 Feb 2020 • Peter Wirnsberger, Andrew J. Ballard, George Papamakarios, Stuart Abercrombie, Sébastien Racanière, Alexander Pritzel, Danilo Jimenez Rezende, Charles Blundell
Here, we cast Targeted FEP as a machine learning problem in which the mapping is parameterized as a neural network that is optimized so as to increase overlap.
4 code implementations • ICML 2020 • Danilo Jimenez Rezende, George Papamakarios, Sébastien Racanière, Michael S. Albergo, Gurtej Kanwar, Phiala E. Shanahan, Kyle Cranmer
Normalizing flows are a powerful tool for building expressive distributions in high dimensions.
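The core mechanism behind any normalizing flow is the change-of-variables formula: push base samples through an invertible map and correct the density by the log-determinant of the Jacobian. A minimal sketch with an affine map (illustrative only; the function names are mine, and the paper itself constructs flows on compact manifolds such as tori and spheres, not on Euclidean space):

```python
import numpy as np

def affine_flow(x, scale, shift):
    """Apply an invertible elementwise affine map and return (y, log|det J|)."""
    y = scale * x + shift
    log_det = np.sum(np.log(np.abs(scale)))
    return y, log_det

def log_prob_base(x):
    """Log-density of a standard normal base distribution."""
    return -0.5 * np.sum(x**2) - 0.5 * len(x) * np.log(2 * np.pi)

def log_prob_flow(y, scale, shift):
    """Density of y = scale * x + shift via the change-of-variables formula:
    log p_Y(y) = log p_X(f^{-1}(y)) - log|det J_f|."""
    x = (y - shift) / scale
    _, log_det = affine_flow(x, scale, shift)
    return log_prob_base(x) - log_det
```

Stacking many such invertible maps, each with a tractable Jacobian, is what makes flows expressive while keeping exact densities.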
6 code implementations • 5 Dec 2019 • George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan
In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference.
no code implementations • 2 Dec 2019 • Slava Voloshynovskiy, Mouad Kondah, Shideh Rezaeifar, Olga Taran, Taras Holotyak, Danilo Jimenez Rezende
In particular, we present a new interpretation of the VAE family based on the IB framework, using a direct decomposition of mutual-information terms, and show some interesting connections to existing methods such as VAE [2; 3], beta-VAE [11], AAE [12], InfoVAE [5] and VAE/GAN [13].
1 code implementation • ICLR 2020 • Peter Toth, Danilo Jimenez Rezende, Andrew Jaegle, Sébastien Racanière, Aleksandar Botev, Irina Higgins
The Hamiltonian formalism plays a central role in classical and quantum physics.
no code implementations • 30 Sep 2019 • Danilo Jimenez Rezende, Sébastien Racanière, Irina Higgins, Peter Toth
This paper introduces Equivariant Hamiltonian Flows, a method for learning expressive densities that are invariant with respect to a known Lie algebra of local symmetry transformations while providing an equivariant representation of the data.
no code implementations • NeurIPS 2019 • Karol Gregor, Danilo Jimenez Rezende, Frederic Besse, Yan Wu, Hamza Merzic, Aaron van den Oord
We propose a way to efficiently train expressive generative models in complex environments.
4 code implementations • 30 May 2019 • Simon A. A. Kohl, Bernardino Romera-Paredes, Klaus H. Maier-Hein, Danilo Jimenez Rezende, S. M. Ali Eslami, Pushmeet Kohli, Andrew Zisserman, Olaf Ronneberger
Medical imaging measures the molecular identity of the tissue within each voxel only indirectly, often yielding ambiguous image evidence for target measures of interest, such as semantic segmentation.
3 code implementations • 1 Oct 2018 • Danilo Jimenez Rezende, Fabio Viola
In spite of remarkable progress in deep latent variable generative modeling, training still remains a challenge due to a combination of optimization and generalization issues.
9 code implementations • NeurIPS 2018 • Simon A. A. Kohl, Bernardino Romera-Paredes, Clemens Meyer, Jeffrey De Fauw, Joseph R. Ledsam, Klaus H. Maier-Hein, S. M. Ali Eslami, Danilo Jimenez Rezende, Olaf Ronneberger
To this end we propose a generative segmentation model based on a combination of a U-Net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible hypotheses.
no code implementations • ICML 2018 • Marco Fraccaro, Danilo Jimenez Rezende, Yori Zwols, Alexander Pritzel, S. M. Ali Eslami, Fabio Viola
In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent's representations during training or via use as part of an explicit planning mechanism.
2 code implementations • NeurIPS 2017 • Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects.
1 code implementation • 22 Nov 2016 • Karol Gregor, Danilo Jimenez Rezende, Daan Wierstra
In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent.
1 code implementation • NeurIPS 2016 • Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, Nicolas Heess
A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world.
1 code implementation • NeurIPS 2016 • Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, Daan Wierstra
We introduce a simple recurrent variational auto-encoder architecture that significantly improves image modeling.
no code implementations • 16 Mar 2016 • Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, Daan Wierstra
In particular, humans have an ability for one-shot generalization: an ability to encounter a new concept, understand its structure, and then be able to generate compelling alternative variations of the concept.
2 code implementations • NeurIPS 2015 • Shakir Mohamed, Danilo Jimenez Rezende
The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents.
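For discrete variables the mutual information can be computed exactly from the joint distribution, which is a useful baseline for the variational bounds the paper develops (a minimal sketch; the function name is mine, and the paper's contribution is estimating this quantity when exact computation is intractable):

```python
import numpy as np

def mutual_information(p_xy):
    """Exact mutual information (in nats) of a discrete joint distribution p(x, y),
    I(X; Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ]."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = p_xy > 0                          # 0 * log 0 = 0 by convention
    return np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask]))
```

For independent variables this returns 0; for perfectly correlated binary variables it returns log 2, the full one bit of shared information.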
16 code implementations • 21 May 2015 • Danilo Jimenez Rezende, Shakir Mohamed
The choice of approximate posterior distribution is one of the core problems in variational inference.
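The paper's answer is to enrich the posterior by passing a simple distribution through invertible transformations. One of the transformations it proposes is the planar flow, sketched below (a minimal NumPy version for a single sample; the paper's formulation, with my own function name):

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow step f(z) = z + u * tanh(w.z + b), with its
    log|det Jacobian| = log|1 + u.psi(z)| where psi(z) = h'(w.z + b) * w."""
    a = np.dot(w, z) + b
    f = z + u * np.tanh(a)
    psi = (1.0 - np.tanh(a) ** 2) * w        # derivative of tanh times w
    log_det = np.log(np.abs(1.0 + np.dot(u, psi)))
    return f, log_det
```

Composing K such steps turns a diagonal Gaussian into a much richer posterior while the log-density correction stays cheap to compute.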
20 code implementations • 16 Feb 2015 • Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation.
5 code implementations • 16 Jan 2014 • Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
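The scalable inference algorithm rests on reparameterizing the Gaussian latent variable so that gradients flow through the sampling step. A minimal sketch of that idea (illustrative NumPy code with my own function names, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, eps=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), so the sample is a
    deterministic, differentiable function of mu and log_var."""
    if eps is None:
        eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    0.5 * sum(exp(log_var) + mu^2 - 1 - log_var)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

Training then maximizes a lower bound that combines a reconstruction term, averaged over such samples, with this analytic KL penalty.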