1 code implementation • 8 Jan 2024 • Jason Yim, Andrew Campbell, Emile Mathieu, Andrew Y. K. Foong, Michael Gastegger, José Jiménez-Luna, Sarah Lewis, Victor Garcia Satorras, Bastiaan S. Veeling, Frank Noé, Regina Barzilay, Tommi S. Jaakkola
The first is motif amortization, in which FrameFlow is trained with the motif as input using a data augmentation strategy.
no code implementations • 14 Dec 2023 • Kieran Didi, Francisco Vargas, Simon V Mathis, Vincent Dutordoir, Emile Mathieu, Urszula J Komorowska, Pietro Lio
Many protein design applications, such as binder or enzyme design, require scaffolding a structural motif with high precision.
1 code implementation • NeurIPS 2023 • Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato
Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems.
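The fast sampling and exact density evaluation come from the coupling structure: half the dimensions pass through unchanged and condition an elementwise affine transform of the other half, so the Jacobian is triangular and both directions are cheap. A minimal sketch of one generic affine coupling layer (not the paper's implementation; `scale_net` and `shift_net` stand in for arbitrary learned networks):

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net):
    """One affine coupling layer: split x, transform the second half
    conditioned on the first. The Jacobian is triangular, so the
    log-determinant is just the sum of the log-scales."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s = scale_net(x1)                  # log-scale, same shape as x2
    t = shift_net(x1)                  # shift, same shape as x2
    y2 = x2 * np.exp(s) + t
    y = np.concatenate([x1, y2], axis=-1)
    log_det = np.sum(s, axis=-1)
    return y, log_det

def affine_coupling_inverse(y, scale_net, shift_net):
    """Exact inverse: the untouched half y1 reproduces s and t,
    so x2 is recovered in closed form."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s = scale_net(y1)
    t = shift_net(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)
```

Because the inverse never needs to invert the networks themselves, density evaluation of data and sampling from the base distribution cost the same single pass.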
1 code implementation • 11 Apr 2023 • Nic Fishman, Leo Klarner, Valentin De Bortoli, Emile Mathieu, Michael Hutchinson
Denoising diffusion models are a novel class of generative algorithms that achieve state-of-the-art performance across a range of domains, including image generation and text-to-image tasks.
1 code implementation • 5 Feb 2023 • Jason Yim, Brian L. Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, Tommi Jaakkola
The design of novel protein structures remains a challenge in protein engineering for applications across biomedicine and chemistry.
no code implementations • 28 Sep 2022 • Angus Phillips, Thomas Seror, Michael Hutchinson, Valentin De Bortoli, Arnaud Doucet, Emile Mathieu
Score-based generative modelling (SGM) has proven to be a very effective method for modelling densities on finite-dimensional spaces.
no code implementations • 7 Jul 2022 • James Thornton, Michael Hutchinson, Emile Mathieu, Valentin De Bortoli, Yee Whye Teh, Arnaud Doucet
Our proposed method generalizes the Diffusion Schrödinger Bridge introduced by De Bortoli et al. (2021) to the non-Euclidean setting and extends Riemannian score-based models beyond the first time reversal.

1 code implementation • 31 May 2022 • Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim
We introduce InstaAug, a method for automatically learning input-specific augmentations from data.
2 code implementations • 6 Feb 2022 • Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet
Score-based generative models (SGMs) are a powerful class of generative models that exhibit remarkable empirical performance.
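Extending SGMs to a Riemannian manifold replaces the Euclidean Euler–Maruyama noising step with a geodesic random walk: sample Gaussian noise, project it onto the tangent space, and follow the exponential map. A minimal sketch on the unit sphere S² under those assumptions (function names are illustrative, not the paper's code):

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x
    in tangent direction v for arc length ||v||."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return x
    return np.cos(norm) * x + np.sin(norm) * v / norm

def brownian_step(x, dt, rng):
    """One step of a geodesic random walk approximating Brownian
    motion on S^2: draw ambient Gaussian noise, project it onto the
    tangent space at x, then map back onto the sphere."""
    noise = rng.normal(size=3) * np.sqrt(dt)
    v = noise - np.dot(noise, x) * x   # tangent-space projection
    return sphere_exp(x, v)
```

Iterating `brownian_step` keeps the state exactly on the manifold, which is the property the Euclidean scheme lacks.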
1 code implementation • ICLR 2022 • Ning Miao, Emile Mathieu, N. Siddharth, Yee Whye Teh, Tom Rainforth
InteL-VAEs use an intermediary set of latent variables to control the stochasticity of the encoding process, before mapping these in turn to the latent representation using a parametric function that encapsulates our desired inductive bias(es).
1 code implementation • NeurIPS 2021 • Emile Mathieu, Adam Foster, Yee Whye Teh
Learning representations of stochastic processes is an emerging problem in machine learning with applications from meta-learning to physical object models to time series.
no code implementations • NeurIPS 2020 • Emile Mathieu, Maximilian Nickel
Normalizing flows have shown great promise for modelling flexible probability distributions in a computationally tractable way.
4 code implementations • NeurIPS 2019 • Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh
We therefore endow VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derive the necessary methods to work with two main Gaussian generalisations on that space.
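One of the Gaussian generalisations used in this setting, the wrapped normal, is sampled by drawing a Euclidean Gaussian in the tangent space at the origin and pushing it into the ball with the exponential map. A hedged sketch of that map and sampler (a generic construction, not the paper's implementation; `c` is the absolute curvature):

```python
import numpy as np

def poincare_exp0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball with
    curvature -c: maps a tangent (Euclidean) vector into the ball."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return v
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def sample_wrapped_normal(dim, sigma, c=1.0, rng=None):
    """Sample a wrapped normal centred at the origin: Gaussian in the
    tangent space, then push forward through exp_0."""
    rng = rng or np.random.default_rng(0)
    v = rng.normal(scale=sigma, size=dim)
    return poincare_exp0(v, c)
```

The `tanh` saturation is what guarantees every sample lands strictly inside the unit ball, however large the tangent draw.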
1 code implementation • 6 Dec 2018 • Emile Mathieu, Tom Rainforth, N. Siddharth, Yee Whye Teh
We develop a generalisation of disentanglement in VAEs (decomposition of the latent representation), characterising it as the fulfilment of two factors: a) the latent encodings of the data having an appropriate level of overlap, and b) the aggregate encoding of the data conforming to a desired structure, represented through the prior.
1 code implementation • 9 Jul 2018 • Benjamin Bloem-Reddy, Adam Foster, Emile Mathieu, Yee Whye Teh
Empirical evidence suggests that heavy-tailed degree distributions occurring in many real networks are well-approximated by power laws with exponents $\eta$ that may take values either less than or greater than two.
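Estimating such an exponent from observed degrees is a standard exercise: for a continuous power-law tail $p(x) \propto x^{-\eta}$ above a cutoff $x_{\min}$, the maximum-likelihood (Hill-type) estimator has a closed form. A minimal sketch of that generic estimator (not the paper's model, which treats the network setting more carefully):

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """Continuous maximum-likelihood estimator for the exponent eta
    of a power-law tail p(x) ~ x^(-eta), x >= x_min:
    eta_hat = 1 + n / sum(log(x_i / x_min))."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    return 1.0 + tail.size / np.sum(np.log(tail / x_min))
```

Usage: draw synthetic power-law samples by inverse-CDF sampling, `x = x_min * u ** (-1.0 / (eta - 1.0))` with `u` uniform, and check the estimator recovers `eta`.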