no code implementations • 26 Jul 2024 • Seongho Son, William Bankes, Sayak Ray Chowdhury, Brooks Paige, Ilija Bogunovic
We theoretically analyse the convergence of NS-DPO in the offline setting, providing upper bounds on the estimation error caused by non-stationary preferences.
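NS-DPO is described here only at a high level; the sketch below is a hedged illustration of one plausible way to make a DPO-style loss non-stationarity-aware, by exponentially down-weighting older preference pairs. The discount factor `gamma`, the time indexing, and the toy log-probabilities are illustrative assumptions, not the paper's exact formulation.

```python
import math

def ns_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, t, T,
                beta=0.1, gamma=0.95):
    """Toy time-weighted DPO loss for one preference pair observed at time t.

    Pairs far in the past (small t relative to the current time T) are
    exponentially down-weighted so the policy tracks recent preferences.
    gamma and beta are illustrative hyperparameters.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    nll = -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
    return gamma ** (T - t) * nll

# A recent pair (t=9) contributes more to the loss than an old one (t=1).
print(ns_dpo_loss(-1.0, -2.0, -1.5, -1.5, t=9, T=10))
print(ns_dpo_loss(-1.0, -2.0, -1.5, -1.5, t=1, T=10))
```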
1 code implementation • 25 Jul 2024 • Chunan Liu, Lilian Denzler, Yihong Chen, Andrew Martin, Brooks Paige
To address this, we propose a novel method, WALLE, which leverages both unstructured modeling from protein language models and structural modeling from graph neural networks.
no code implementations • 17 Jul 2024 • Daniel Tan, David Chanin, Aengus Lynch, Dimitrios Kanoulas, Brooks Paige, Adria Garriga-Alonso, Robert Kirk
In this work, we rigorously investigate these properties, and show that steering vectors have substantial limitations both in- and out-of-distribution.
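For context, a common way to build a steering vector (not necessarily the exact protocol investigated here) is to take the difference of mean activations between two contrastive prompt sets and add it to a hidden state at inference time. A minimal numpy sketch with synthetic activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer activations for two contrastive prompt sets.
acts_pos = rng.normal(loc=1.0, size=(32, 64))   # prompts exhibiting the behaviour
acts_neg = rng.normal(loc=-1.0, size=(32, 64))  # prompts lacking it

# Steering vector: difference of mean activations.
steer_vec = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)

def steer(hidden_state, alpha=1.0):
    """Add the steering vector to a hidden state during the forward pass."""
    return hidden_state + alpha * steer_vec

h = rng.normal(size=64)
print("shift norm:", np.linalg.norm(steer(h) - h))  # scales with alpha
```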
no code implementations • 2 May 2024 • Maksym Korablyov, Cheng-Hao Liu, Moksh Jain, Almer M. van der Sloot, Eric Jolicoeur, Edward Ruediger, Andrei Cristian Nica, Emmanuel Bengio, Kostiantyn Lapchevskyi, Daniel St-Cyr, Doris Alexandra Schuetz, Victor Ion Butoi, Jarrid Rector-Brooks, Simon Blackburn, Leo Feng, Hadi Nekoei, SaiKrishna Gottipati, Priyesh Vijayan, Prateek Gupta, Ladislav Rampášek, Sasikanth Avancha, Pierre-Luc Bacon, William L. Hamilton, Brooks Paige, Sanchit Misra, Stanislaw Kamil Jastrzebski, Bharat Kaul, Doina Precup, José Miguel Hernández-Lobato, Marwin Segler, Michael Bronstein, Anne Marinier, Mike Tyers, Yoshua Bengio
Despite substantial progress in machine learning for scientific discovery in recent years, truly de novo design of small molecules which exhibit a property of interest remains a significant challenge.
1 code implementation • 2 Mar 2024 • Martin Marek, Brooks Paige, Pavel Izmailov
First, we introduce a "DirClip" prior that is practical to sample and nearly matches the performance of a cold posterior.
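The snippet names the "DirClip" prior without defining it; the sketch below is a hedged guess at the general shape of such a prior: a Dirichlet-style log density over softmax outputs whose per-class terms are clipped so the density stays bounded as probabilities collapse toward zero. The constants `alpha` and `max_term` are made up for illustration.

```python
import numpy as np

def dirclip_style_log_prior(probs, alpha=0.9, max_term=1.0):
    """Illustrative Dirichlet-style log prior over softmax outputs.

    For alpha < 1 the Dirichlet density blows up as any probability
    approaches zero; clipping each per-class term from above keeps the
    log density bounded. All constants here are made-up illustrations.
    """
    terms = (alpha - 1.0) * np.log(np.asarray(probs))
    return float(np.sum(np.minimum(terms, max_term)))

print(dirclip_style_log_prior([0.98, 0.01, 0.01]))      # mild concentration
print(dirclip_style_log_prior([1 - 2e-9, 1e-9, 1e-9]))  # clipped terms
```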
1 code implementation • 5 Feb 2024 • Wenlin Chen, Mingtian Zhang, Brooks Paige, José Miguel Hernández-Lobato, David Barber
The inadequate mixing of conventional Markov Chain Monte Carlo (MCMC) methods for multi-modal distributions presents a significant challenge in practical applications such as Bayesian inference and molecular dynamics.
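As a concrete illustration of the mixing failure the abstract refers to (this is the problem setup, not the paper's proposed sampler), a small-step random-walk Metropolis chain on a well-separated bimodal target essentially never leaves the mode it starts in:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Bimodal target: equal-weight unit Gaussians centred at -5 and +5.
    return np.logaddexp(-0.5 * (x - 5.0) ** 2, -0.5 * (x + 5.0) ** 2)

x, samples = -5.0, []
for _ in range(20_000):
    prop = x + rng.normal(scale=0.5)          # small random-walk step
    if np.log(rng.uniform()) < log_p(prop) - log_p(x):
        x = prop
    samples.append(x)

# With well-separated modes, the chain rarely (if ever) crosses between them.
print("fraction of samples near +5:", np.mean(np.array(samples) > 0.0))
```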
no code implementations • 2 Nov 2023 • Mathieu Alain, So Takao, Brooks Paige, Marc Peter Deisenroth
In this paper, we go beyond this dyadic setting and consider polyadic relations that include interactions between vertices, edges and one of their generalisations, known as cells.
1 code implementation • NeurIPS 2023 • Mingtian Zhang, Alex Hawkins-Hooker, Brooks Paige, David Barber
Energy-Based Models (EBMs) offer a versatile framework for modeling complex data distributions.
no code implementations • 15 Sep 2022 • Mingtian Zhang, Oscar Key, Peter Hayes, David Barber, Brooks Paige, François-Xavier Briol
Score-based divergences have been widely used in machine learning and statistics applications.
no code implementations • 28 May 2022 • Mingtian Zhang, Tim Z. Xiao, Brooks Paige, David Barber
Latent variable models like the Variational Auto-Encoder (VAE) are commonly used to learn representations of images.
no code implementations • 6 Dec 2021 • Alexander Lavin, David Krakauer, Hector Zenil, Justin Gottschlich, Tim Mattson, Johann Brehmer, Anima Anandkumar, Sanjay Choudry, Kamil Rocki, Atılım Güneş Baydin, Carina Prunkl, Brooks Paige, Olexandr Isayev, Erik Peterson, Peter L. McMahon, Jakob Macke, Kyle Cranmer, Jiaxin Zhang, Haruko Wainwright, Adi Hanuka, Manuela Veloso, Samuel Assefa, Stephan Zheng, Avi Pfeffer
We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence.
no code implementations • AABI Symposium 2022 • Giorgos Felekis, Theo Damoulas, Brooks Paige
We study probabilistic deep learning methods through the lens of approximate Bayesian inference.
1 code implementation • 8 Nov 2021 • Hugh Dance, Brooks Paige
Variable selection in Gaussian processes (GPs) is typically undertaken by thresholding the inverse lengthscales of automatic relevance determination kernels, but in high-dimensional datasets this approach can be unreliable.
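The conventional approach the abstract describes can be sketched directly, assuming a standard scikit-learn ARD setup; the threshold value here is an arbitrary heuristic, which is precisely the kind of choice the paper argues can be unreliable in high dimensions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Only dimensions 0 and 1 matter; 2-4 are irrelevant noise dimensions.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)

# ARD kernel: one lengthscale per input dimension, learned by marginal
# likelihood maximisation during fitting.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(5)),
                              alpha=1e-2).fit(X, y)

inv_ls = 1.0 / gp.kernel_.length_scale
selected = np.where(inv_ls > 0.1)[0]   # arbitrary threshold
print("inverse lengthscales:", np.round(inv_ls, 3))
print("selected dimensions:", selected)
```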
1 code implementation • 9 Jun 2021 • Matthew Willetts, Brooks Paige
Surprisingly, we discover side information is not necessary for algorithmic stability: using standard quantitative measures of identifiability, we find deep generative models with latent clusterings are empirically identifiable to the same degree as models which rely on auxiliary labels.
1 code implementation • NeurIPS 2020 • John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin H. S. Segler, José Miguel Hernández-Lobato
When designing new molecules with particular properties, it matters not only what to make but, crucially, how to make it.
1 code implementation • 25 Nov 2020 • George Lamb, Brooks Paige
Graph neural networks for molecular property prediction are frequently underspecified by data and fail to generalise to new scaffolds at test time.
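Generalisation to new scaffolds is typically measured with a scaffold split, grouping molecules by Bemis-Murcko scaffold so that train and test sets share none. A minimal sketch with arbitrary example SMILES, assuming RDKit is available:

```python
from collections import defaultdict
from rdkit.Chem.Scaffolds.MurckoScaffold import MurckoScaffoldSmiles

smiles = ["c1ccccc1CC(=O)O", "c1ccccc1CCN", "C1CCCCC1O", "CCO"]

# Group molecules by Bemis-Murcko scaffold so train/test share no scaffolds.
by_scaffold = defaultdict(list)
for s in smiles:
    by_scaffold[MurckoScaffoldSmiles(smiles=s)].append(s)

# Hold out the smallest scaffold group as the "new scaffold" test set.
groups = sorted(by_scaffold, key=lambda k: len(by_scaffold[k]), reverse=True)
train = [m for k in groups[:-1] for m in by_scaffold[k]]
test = by_scaffold[groups[-1]]
print("train:", train)
print("test:", test)
```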
no code implementations • 24 Nov 2020 • Aneesh Pappu, Brooks Paige
Finding that they are not, we explore pretraining and the meta-learning method MAML (and its variants FO-MAML and ANIL) for improving graph neural network performance by transfer learning from related tasks.
no code implementations • NeurIPS 2020 • Amina Mollaysa, Brooks Paige, Alexandros Kalousis
Unfortunately, maximum likelihood training of such models often fails: the samples drawn from the generative model do not adequately respect the input properties.
no code implementations • ICLR 2021 • Yuge Shi, Brooks Paige, Philip H. S. Torr, N. Siddharth
Multimodal learning for generative models often refers to the learning of abstract concepts from the commonality of information in multiple modalities, such as vision and language.
no code implementations • 18 Feb 2020 • Alexander Camuto, Matthew Willetts, Brooks Paige, Chris Holmes, Stephen Roberts
Separating high-dimensional data like images into independent latent factors, i.e. independent component analysis (ICA), remains an open research problem.
3 code implementations • NeurIPS 2019 • Yuge Shi, N. Siddharth, Brooks Paige, Philip H. S. Torr
In this work, we characterise successful learning of such models as the fulfillment of four criteria: i) implicit latent decomposition into shared and private subspaces, ii) coherent joint generation over all modalities, iii) coherent cross-generation across individual modalities, and iv) improved model learning for individual modalities through multi-modal integration.
1 code implementation • 6 Nov 2019 • Judith Clymo, Haik Manukian, Nathanaël Fijalkow, Adrià Gascón, Brooks Paige
A particular challenge lies in generating meaningful sets of inputs and outputs that characterize a given program well and accurately demonstrate its behavior.
no code implementations • 25 Sep 2019 • Amina Mollaysa, Brooks Paige, Alexandros Kalousis
Though machine learning approaches have shown great success in estimating properties of small molecules, the inverse problem of generating molecules with desired properties remains challenging.
1 code implementation • NeurIPS 2019 • John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin H. S. Segler, José Miguel Hernández-Lobato
Deep generative models are able to suggest new organic molecules by generating strings, trees, and graphs representing their structure.
no code implementations • ICLR Workshop DeepGenStruct 2019 • John Bradshaw, Matt J. Kusner, Brooks Paige, Marwin H. S. Segler, José Miguel Hernández-Lobato
We therefore propose a new molecule generation model, mirroring a more realistic real-world process, where reactants are selected and combined to form more complex molecules.
3 code implementations • 27 Sep 2018 • Jan-Willem van de Meent, Brooks Paige, Hongseok Yang, Frank Wood
We start with a discussion of model-based reasoning and explain why conditioning is a foundational computation central to the fields of probabilistic machine learning and artificial intelligence.
no code implementations • 18 Jul 2018 • Stephen Law, Brooks Paige, Chris Russell
Not only do few quantitative methods exist for measuring the urban environment, but the collection of such data is also costly and subjective.
no code implementations • ICLR 2019 • John Bradshaw, Matt J. Kusner, Brooks Paige, Marwin H. S. Segler, José Miguel Hernández-Lobato
Chemical reactions can be described as the stepwise redistribution of electrons in molecules.
no code implementations • 6 Apr 2018 • Babak Esmaeili, Hao Wu, Sarthak Jain, Alican Bozkurt, N. Siddharth, Brooks Paige, Dana H. Brooks, Jennifer Dy, Jan-Willem van de Meent
Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner.
1 code implementation • ICLR 2018 • David Janz, Jos van der Westhuizen, Brooks Paige, Matt J. Kusner, José Miguel Hernández-Lobato
This validator provides insight into how individual sequence elements influence the validity of the overall sequence, and can be used to constrain sequence-based models to generate valid sequences -- and thus faithfully model discrete objects.
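A toy sketch of the idea, with a hand-written balanced-parentheses oracle standing in for the learned validator: at each step, sampling is restricted to tokens the validator deems extendable to a valid sequence.

```python
import random

def allowed_next(prefix, max_len=8):
    """Toy validity oracle: balanced parentheses; <end> only when balanced."""
    depth = prefix.count("(") - prefix.count(")")
    opts = []
    if len(prefix) < max_len:
        opts.append("(")
    if depth > 0:
        opts.append(")")
    if depth == 0 and prefix:
        opts.append("<end>")
    return opts

random.seed(0)
seq = []
while True:
    tok = random.choice(allowed_next("".join(seq)))  # mask invalid tokens
    if tok == "<end>":
        break
    seq.append(tok)
print("".join(seq))  # always a balanced string
```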
1 code implementation • NeurIPS 2017 • N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah D. Goodman, Pushmeet Kohli, Frank Wood, Philip H. S. Torr
We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.
4 code implementations • ICML 2017 • Matt J. Kusner, Brooks Paige, José Miguel Hernández-Lobato
Crucially, state-of-the-art methods often produce outputs that are not valid.
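One remedy this line of work pursues is constraining generation with a grammar, so only syntactically valid outputs can be produced. A toy sketch of grammar-constrained decoding, with uniform random choices standing in for a model's learned rule probabilities:

```python
import random

# Tiny context-free grammar for arithmetic expressions; at each step the
# decoder may only choose productions whose left-hand side matches the
# nonterminal on top of the stack.
GRAMMAR = {
    "E": [["E", "+", "T"], ["T"]],
    "T": [["x"], ["1"], ["(", "E", ")"]],
}

random.seed(1)
stack, out = ["E"], []
while stack:
    sym = stack.pop()
    if sym not in GRAMMAR:                   # terminal symbol: emit it
        out.append(sym)
        continue
    rule = random.choice(GRAMMAR[sym])       # a model would score valid rules
    stack.extend(reversed(rule))
print("".join(out))  # always a well-formed expression
```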
no code implementations • 22 Nov 2016 • N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H. S. Torr
We develop a framework for incorporating structured graphical models in the encoders of variational autoencoders (VAEs) that allows us to induce interpretable representations through approximate variational inference.
no code implementations • 21 Nov 2016 • David Janz, Brooks Paige, Tom Rainforth, Jan-Willem van de Meent, Frank Wood
Existing methods for structure discovery in time series data construct interpretable, compositional kernels for Gaussian process regression models.
1 code implementation • 22 Feb 2016 • Brooks Paige, Frank Wood
We introduce a new approach for amortizing inference in directed graphical models by learning heuristic approximations to stochastic inverses, designed specifically for use as proposal distributions in sequential Monte Carlo methods.
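A hedged toy version of the idea on a linear-Gaussian model, with least-squares regression standing in for the learned inverse network: fit q(z|x) on samples drawn from the joint, then use it as a proposal for an observed x.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: z ~ N(0,1), x | z ~ N(2z, 0.5^2). The "inverse" p(z|x) is
# Gaussian; we amortize it by regressing z on x over joint samples.
z = rng.normal(size=10_000)
x = 2.0 * z + 0.5 * rng.normal(size=10_000)

# Fit q(z|x) = N(a*x + b, s^2) by least squares on (x, z) pairs.
a, b = np.polyfit(x, z, deg=1)
s = np.std(z - (a * x + b))

# Use q as an importance/SMC proposal: sample z given an observed x.
x_obs = 1.3
z_prop = a * x_obs + b + s * rng.normal(size=5)
print("proposal mean:", a * x_obs + b, "samples:", np.round(z_prop, 2))
```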
1 code implementation • 16 Feb 2016 • Tom Rainforth, Christian A. Naesseth, Fredrik Lindsten, Brooks Paige, Jan-Willem van de Meent, Arnaud Doucet, Frank Wood
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers.
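The distinctive step in iPMCMC is letting conditional ("retained-particle") status move between workers according to their marginal likelihood estimates. The sketch below shows a simplified version of that index-swapping step; the weighting and bookkeeping are abbreviated relative to the full algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

M, P = 8, 4                      # total workers, conditional (CSMC) workers
cond = set(range(P))             # indices currently running conditional SMC
log_Z = rng.normal(size=M)       # per-worker marginal likelihood estimates

# After each sweep, every conditional slot is resampled among itself and
# the unconditional workers, in proportion to their Z estimates, letting
# better-scoring unconditional sweeps take over the retained particle.
for j in list(cond):
    pool = [m for m in range(M) if m not in cond] + [j]
    w = np.exp(log_Z[pool] - np.max(log_Z[pool]))
    pick = int(rng.choice(pool, p=w / w.sum()))
    cond.remove(j)
    cond.add(pick)
print("conditional workers after swap:", sorted(cond))
```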
1 code implementation • 11 Oct 2015 • Ingmar Schuster, Heiko Strathmann, Brooks Paige, Dino Sejdinovic
As KSMC does not require access to target gradients, it is particularly applicable to targets whose gradients are unknown or prohibitively expensive.
1 code implementation • 16 Jul 2015 • Jan-Willem van de Meent, Brooks Paige, David Tolpin, Frank Wood
In this work, we explore how probabilistic programs can be used to represent policies in sequential decision problems.
no code implementations • 25 Feb 2015 • David Tolpin, Brooks Paige, Jan-Willem van de Meent, Frank Wood
We introduce a new approach to solving path-finding problems under uncertainty by representing them as probabilistic models and applying domain-independent inference algorithms to the models.
1 code implementation • 22 Jan 2015 • David Tolpin, Jan-Willem van de Meent, Brooks Paige, Frank Wood
We introduce an adaptive output-sensitive Metropolis-Hastings algorithm for probabilistic models expressed as programs, Adaptive Lightweight Metropolis-Hastings (AdLMH).
no code implementations • NeurIPS 2014 • Brooks Paige, Frank Wood, Arnaud Doucet, Yee Whye Teh
We introduce a new sequential Monte Carlo algorithm we call the particle cascade.
no code implementations • 3 Mar 2014 • Brooks Paige, Frank Wood
Forward inference techniques such as sequential Monte Carlo and particle Markov chain Monte Carlo for probabilistic programming can be implemented in any programming language by creative use of standardized operating system functionality including processes, forking, mutexes, and shared memory.
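A minimal Unix-only toy in this spirit: multinomial resampling implemented by forking, so each surviving particle continues in its own child process (the mutexes and shared memory that would handle weight exchange are omitted).

```python
import os
import random

random.seed(0)
weights = [0.1, 0.6, 0.3]                       # normalized particle weights
counts = [0] * len(weights)                     # multinomial resampling
for _ in range(len(weights)):
    counts[random.choices(range(len(weights)), weights)[0]] += 1

# Resample by forking: each surviving copy of a particle becomes a child.
children = []
for i, c in enumerate(counts):
    for _ in range(c):
        pid = os.fork()
        if pid == 0:
            # Child process: this copy would carry particle i's state
            # forward to the next observation; here it simply exits.
            os._exit(0)
        children.append(pid)
for pid in children:
    os.waitpid(pid, 0)
print("resampled particle counts:", counts)
```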
no code implementations • 28 Jan 2014 • Jan-Willem van de Meent, Brooks Paige, Frank Wood
In this paper we demonstrate that tempering Markov chain Monte Carlo samplers for Bayesian models by recursively subsampling observations without replacement can improve the performance of baseline samplers in terms of effective sample size per computation.
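A small numeric illustration of why subsampling acts like tempering: the log-likelihood evaluated on n of N observations is flatter (smaller gaps between parameter values) for small n, roughly like raising the full likelihood to the power n/N.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, size=1000)   # observations from N(2, 1)

# Evaluating the likelihood on a without-replacement subsample of size n
# flattens it, so chains at small n move freely between parameter values
# while chains at larger n sharpen around the posterior mode.
for n in (10, 100, 1000):
    idx = rng.choice(data.size, size=n, replace=False)
    log_lik = lambda theta: np.sum(-0.5 * (data[idx] - theta) ** 2)
    gap = log_lik(2.0) - log_lik(1.0)
    print(f"n={n:4d}  log-likelihood gap between theta=2 and theta=1: {gap:7.1f}")
```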
no code implementations • NeurIPS 2013 • Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski
We develop an inference and optimal design procedure for recovering synaptic weights in neural microcircuits.