Search Results for author: Emile Mathieu

Found 15 papers, 11 papers with code

SE(3) Equivariant Augmented Coupling Flows

1 code implementation • NeurIPS 2023 • Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems.
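
Since the snippet leans on that property, here is a minimal sketch of a plain affine coupling layer, the generic building block rather than the paper's SE(3)-equivariant augmented construction; the conditioner MLP and its layer sizes are illustrative assumptions.

```python
# Sketch of an affine coupling layer: half the coordinates are transformed
# conditioned on the other half, so the forward map, its inverse, and the
# log-determinant are all cheap -- which is what makes sampling and density
# evaluation fast.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        # Hypothetical conditioner network; any MLP works here.
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_scale, shift = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_scale) + shift
        # log|det J| is just the sum of log-scales: density evaluation is O(d).
        log_det = log_scale.sum(dim=-1)
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_scale, shift = self.net(y1).chunk(2, dim=-1)
        x2 = (y2 - shift) * torch.exp(-log_scale)
        return torch.cat([y1, x2], dim=-1)
```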

Diffusion Models for Constrained Domains

1 code implementation • 11 Apr 2023 • Nic Fishman, Leo Klarner, Valentin De Bortoli, Emile Mathieu, Michael Hutchinson

Denoising diffusion models are a novel class of generative algorithms that achieve state-of-the-art performance across a range of domains, including image generation and text-to-image tasks.

Denoising • Image Generation • +2

SE(3) diffusion model with application to protein backbone generation

1 code implementation • 5 Feb 2023 • Jason Yim, Brian L. Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, Tommi Jaakkola

The design of novel protein structures remains a challenge in protein engineering for applications across biomedicine and chemistry.

Protein Structure Prediction

Spectral Diffusion Processes

no code implementations • 28 Sep 2022 • Angus Phillips, Thomas Seror, Michael Hutchinson, Valentin De Bortoli, Arnaud Doucet, Emile Mathieu

Score-based generative modelling (SGM) has proven to be a very effective method for modelling densities on finite-dimensional spaces.
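
For context, the standard Euclidean denoising score matching objective that such SGMs train with (a known background formula, not this paper's extension beyond finite-dimensional spaces) is

$$\mathcal{L}(\theta) = \mathbb{E}_{t,\,x_0,\,x_t \sim p(x_t \mid x_0)}\Big[\lambda(t)\,\big\| s_\theta(x_t, t) - \nabla_{x_t} \log p(x_t \mid x_0) \big\|^2\Big],$$

where $s_\theta$ is the learned score network and $\lambda(t)$ a positive weighting over noise levels.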

Dimensionality Reduction

Riemannian Diffusion Schrödinger Bridge

no code implementations • 7 Jul 2022 • James Thornton, Michael Hutchinson, Emile Mathieu, Valentin De Bortoli, Yee Whye Teh, Arnaud Doucet

Our proposed method generalizes the Diffusion Schrödinger Bridge introduced in De Bortoli et al. (2021) to the non-Euclidean setting and extends Riemannian score-based models beyond the first time reversal.
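
As a reference point, the "first time reversal" mentioned here is, in the Euclidean case, the classical reverse-time SDE of Anderson (1982): if the forward noising process follows $dX_t = b(t, X_t)\,dt + \sigma\,dB_t$, then

$$dY_t = \big[-b(T - t, Y_t) + \sigma^2\,\nabla \log p_{T-t}(Y_t)\big]\,dt + \sigma\,dB_t, \qquad Y_0 \sim p_T,$$

runs the diffusion backwards; the paper's contribution is a Schrödinger-bridge analogue of this construction on Riemannian manifolds.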

Density Estimation

Riemannian Score-Based Generative Modelling

2 code implementations • 6 Feb 2022 • Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet

Score-based generative models (SGMs) are a powerful class of generative models that exhibit remarkable empirical performance.

Denoising

On Incorporating Inductive Biases into VAEs

1 code implementation • ICLR 2022 • Ning Miao, Emile Mathieu, N. Siddharth, Yee Whye Teh, Tom Rainforth

InteL-VAEs use an intermediary set of latent variables to control the stochasticity of the encoding process, before mapping these in turn to the latent representation using a parametric function that encapsulates our desired inductive bias(es).
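
A minimal sketch of that pipeline, assuming a Gaussian encoder and using a sparsifying soft threshold as a stand-in for the bias-encoding map $f$ (the paper's actual mappings differ):

```python
# InteL-VAE encoder sketch: a stochastic encoder produces intermediary
# latents z~, which a deterministic map f (chosen to encode the desired
# inductive bias) turns into the final representation z = f(z~).
import torch
import torch.nn as nn

class IntelVAEEncoder(nn.Module):
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.log_var = nn.Linear(128, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterised intermediary latent controls the stochasticity.
        z_tilde = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Illustrative bias-encoding map f: soft threshold -> sparse codes.
        z = torch.sign(z_tilde) * torch.relu(z_tilde.abs() - 0.5)
        return z, mu, log_var
```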

Inductive Bias

On Contrastive Representations of Stochastic Processes

1 code implementation • NeurIPS 2021 • Emile Mathieu, Adam Foster, Yee Whye Teh

Learning representations of stochastic processes is an emerging problem in machine learning with applications from meta-learning to physical object models to time series.
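
For reference, the generic InfoNCE objective that contrastive representation learners typically build on (standard background, not necessarily this paper's exact loss) is

$$\mathcal{L} = -\,\mathbb{E}\left[\log \frac{\exp\!\big(\mathrm{sim}(z_i, z_i^{+})/\tau\big)}{\sum_{j} \exp\!\big(\mathrm{sim}(z_i, z_j)/\tau\big)}\right],$$

where $z_i^{+}$ is a positive pair drawn from the same underlying process realisation as $z_i$ and $\tau$ is a temperature.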

Meta-Learning • Time Series • +1

Riemannian Continuous Normalizing Flows

no code implementations • NeurIPS 2020 • Emile Mathieu, Maximilian Nickel

Normalizing flows have shown great promise for modelling flexible probability distributions in a computationally tractable way.
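
The tractability referred to here rests on the instantaneous change-of-variables formula for continuous normalizing flows (Chen et al., 2018): for dynamics $\frac{dz(t)}{dt} = f_\theta(z(t), t)$,

$$\frac{\partial \log p_t(z(t))}{\partial t} = -\operatorname{tr}\!\left(\frac{\partial f_\theta}{\partial z(t)}\right),$$

and, roughly, the Riemannian extension replaces this Euclidean trace with the divergence with respect to the manifold's metric.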

Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders

4 code implementations • NeurIPS 2019 • Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh

We therefore endow VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derive the necessary methods to work with two main Gaussian generalisations on that space.
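
A minimal sketch of one such generalisation, assuming the wrapped normal: sample a Euclidean Gaussian in the tangent space at the origin and push it onto the Poincaré ball (curvature $-c$) with the closed-form exponential map.

```python
# Exponential map at the origin of the Poincaré ball with curvature -c:
# exp_0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||).
import math
import torch

def exp_map_origin(v, c=1.0):
    sc = math.sqrt(c)
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.tanh(sc * norm) * v / (sc * norm)

eps = 0.3 * torch.randn(64, 2)      # Gaussian sample in the tangent space at 0
z = exp_map_origin(eps)             # wrapped-normal sample on the ball
assert (z.norm(dim=-1) < 1).all()   # all points lie strictly inside the unit ball
```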

Disentangling Disentanglement in Variational Autoencoders

1 code implementation6 Dec 2018 Emile Mathieu, Tom Rainforth, N. Siddharth, Yee Whye Teh

We develop a generalisation of disentanglement in VAEs (decomposition of the latent representation), characterising it as the fulfilment of two factors: a) the latent encodings of the data having an appropriate level of overlap, and b) the aggregate encoding of the data conforming to a desired structure, represented through the prior.
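
Schematically, and hedging on the paper's exact notation, the two factors correspond to an objective of the form

$$\mathcal{L}(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \beta\,\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big) \;-\; \alpha\,\mathrm{D}\big(q_\phi(z),\,p(z)\big),$$

where the $\beta$ term controls the overlap of individual encodings (factor a) and the $\alpha$ term pushes the aggregate encoding $q_\phi(z)$ towards the structured prior (factor b).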

Clustering • Disentanglement

Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks

1 code implementation • 9 Jul 2018 • Benjamin Bloem-Reddy, Adam Foster, Emile Mathieu, Yee Whye Teh

Empirical evidence suggests that heavy-tailed degree distributions occurring in many real networks are well-approximated by power laws with exponents $\eta$ that may take values either less than or greater than two.
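
Concretely, a power law with exponent $\eta$ means the tail of the degree distribution behaves as

$$P(\deg = k) \propto k^{-\eta} \quad \text{for large } k,$$

and the two regimes differ qualitatively: for $\eta \le 2$ the mean degree diverges in the limit.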
