Search Results for author: Laurence Illing Midgley

Found 3 papers, 3 papers with code

Flow Annealed Importance Sampling Bootstrap

3 code implementations • 3 Aug 2022 • Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are tractable density models that can approximate complicated target distributions, e.g., Boltzmann distributions of physical systems.
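As context for the abstract above, here is a minimal sketch (not the paper's code) of why flows are "tractable density models": an invertible transform of a Gaussian base yields both exact samples and exact log-densities via the change-of-variables formula. The one-dimensional affine map and its parameters are purely illustrative.

```python
import numpy as np

class AffineFlow:
    """Toy 1-D normalizing flow: x = scale * z + shift, z ~ N(0, 1)."""

    def __init__(self, scale=2.0, shift=1.0):
        # Illustrative fixed parameters; a real flow would learn these.
        self.scale, self.shift = scale, shift

    def sample(self, n, rng):
        z = rng.standard_normal(n)
        return self.scale * z + self.shift

    def log_prob(self, x):
        z = (x - self.shift) / self.scale                 # inverse transform
        log_base = -0.5 * (np.asarray(z) ** 2 + np.log(2 * np.pi))
        return log_base - np.log(abs(self.scale))         # minus log|det Jacobian|
```

Stacking many such invertible maps (with learned parameters) gives the expressive flows used to approximate Boltzmann distributions.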

Bootstrap Your Flow

1 code implementation • AABI Symposium 2022 • Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, José Miguel Hernández-Lobato

Normalizing flows are flexible, parameterized distributions that can be used to approximate expectations from intractable distributions via importance sampling.
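The abstract above describes estimating expectations under an intractable distribution by importance sampling from a tractable proposal. A hedged sketch (illustrative only, not the paper's method): here a wide Gaussian stands in for a trained flow proposal q, and self-normalised weights correct samples toward an unnormalised target p.

```python
import numpy as np

def importance_estimate(f, log_p, sample_q, log_q, n, rng):
    """Self-normalised importance sampling estimate of E_p[f(x)]."""
    x = sample_q(n, rng)
    log_w = log_p(x) - log_q(x)          # importance log-weights
    w = np.exp(log_w - log_w.max())      # subtract max for numerical stability
    w /= w.sum()                         # self-normalise (p may be unnormalised)
    return np.sum(w * f(x))

# Toy example: target p = N(3, 1), proposal q = N(0, 3); estimate E_p[x] = 3.
log_p = lambda x: -0.5 * (x - 3.0) ** 2
log_q = lambda x: -0.5 * (x / 3.0) ** 2
sample_q = lambda n, rng: 3.0 * rng.standard_normal(n)
f = lambda x: x
```

In the papers listed here, the proposal q is a learned normalizing flow rather than a fixed Gaussian, which is what makes the weights well-behaved for hard targets.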


Deep Reinforcement Learning for Process Synthesis

2 code implementations • 23 Sep 2020 • Laurence Illing Midgley

This paper demonstrates the application of reinforcement learning (RL) to process synthesis by presenting Distillation Gym, a set of RL environments in which an RL agent is tasked with designing a distillation train, given a user-defined multi-component feed stream.
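To illustrate the abstract above: such environments typically expose a gym-style reset/step interface. The sketch below is purely illustrative (it is not Distillation Gym's actual API); the toy state, action, and reward are assumptions standing in for the real flowsheet state and separation economics.

```python
import numpy as np

class ToySynthesisEnv:
    """Toy stand-in for an RL process-synthesis environment (hypothetical API)."""

    def __init__(self, n_components=3):
        self.n_components = n_components
        self.state = None

    def reset(self, rng):
        # Toy state: random composition of a multi-component feed stream.
        self.state = rng.dirichlet(np.ones(self.n_components))
        return self.state

    def step(self, action):
        # Toy action: pick which component to separate out next.
        reward = float(self.state[action])        # toy reward: fraction recovered
        self.state = np.delete(self.state, action)
        self.state = self.state / self.state.sum()  # renormalise remaining feed
        done = self.state.size <= 1               # episode ends with one component
        return self.state, reward, done
```

An agent would interact with this loop (reset, then repeated steps until `done`), learning a policy over separation choices; the real environments replace the toy reward with process-simulation outcomes.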

