no code implementations • 12 Feb 2024 • Matthew Niedoba, Dylan Green, Saeid Naderiparizi, Vasileios Lioutas, Jonathan Wilder Lavington, Xiaoxuan Liang, Yunpeng Liu, Ke Zhang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Score function estimation is the cornerstone of both training and sampling from diffusion generative models.
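For intuition, here is a minimal denoising score-matching sketch, purely illustrative and not the method proposed in the paper: a toy 1-D dataset, a single fixed noise level, and a small MLP. For Gaussian corruption x_t = x_0 + σε, the conditional score is -(x_t - x_0)/σ², which the network regresses onto; sampling then runs a few unadjusted Langevin steps with the learned score.

```python
# Minimal denoising score-matching sketch (illustrative only, not the paper's method).
import torch
import torch.nn as nn

torch.manual_seed(0)
x0 = torch.randn(1024, 1) * 2.0 + 1.0          # toy 1-D data
sigma = 0.5                                     # single fixed noise level for simplicity

score_net = nn.Sequential(nn.Linear(1, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)

for step in range(500):
    eps = torch.randn_like(x0)
    xt = x0 + sigma * eps                       # corrupted sample
    target = -(xt - x0) / sigma**2              # true conditional score of the corruption kernel
    loss = ((score_net(xt) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling sketch: a few unadjusted Langevin steps driven by the learned score.
x = torch.randn(1024, 1) * 3.0
step_size = 0.05
for _ in range(100):
    x = x + 0.5 * step_size * score_net(x).detach() + (step_size ** 0.5) * torch.randn_like(x)
```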
no code implementations • 19 May 2023 • Yunpeng Liu, Vasileios Lioutas, Jonathan Wilder Lavington, Matthew Niedoba, Justice Sefas, Setareh Dabiri, Dylan Green, Xiaoxuan Liang, Berend Zwartsenberg, Adam Ścibior, Frank Wood
Algorithms that learn multi-agent behavioral models from human demonstrations have led to increasingly realistic simulations in autonomous driving.
no code implementations • 17 Jun 2022 • Berend Zwartsenberg, Adam Ścibior, Matthew Niedoba, Vasileios Lioutas, Yunpeng Liu, Justice Sefas, Setareh Dabiri, Jonathan Wilder Lavington, Trevor Campbell, Frank Wood
We present a novel, conditional generative probabilistic model of set-valued data with a tractable log density.
no code implementations • 18 Jun 2021 • Adam Ścibior, Frank Wood
Particle filters are not compatible with automatic differentiation due to the presence of discrete resampling steps.
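A toy PyTorch sketch of that incompatibility is shown below (illustrative only, not the differentiable estimator developed in the paper): multinomial resampling returns integer indices, so gradients still reach the selected particle values, but the dependence of the selection itself on the weights, and hence on the model parameter, is lost.

```python
# Toy sketch of why discrete resampling blocks gradients (not the paper's estimator).
import torch

theta = torch.tensor(0.5, requires_grad=True)   # model parameter

particles = theta + torch.randn(100)            # particles depend on theta (differentiable path)
log_weights = -(particles - 1.0) ** 2           # toy observation log-likelihood
weights = torch.softmax(log_weights, dim=0)

# Multinomial resampling returns integer indices: the draw carries no gradient, so the
# dependence of the *selection* on `weights` (and hence on theta) is cut from the graph.
idx = torch.multinomial(weights, num_samples=100, replacement=True)
resampled = particles[idx]

loss = resampled.mean()
loss.backward()
print(theta.grad)  # only the path through the selected particle *values* survives
```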
1 code implementation • 31 Dec 2020 • Andrew Warrington, J. Wilder Lavington, Adam Ścibior, Mark Schmidt, Frank Wood
Policies for partially observed Markov decision processes can be efficiently learned by imitating policies for the corresponding fully observed Markov decision processes.
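As a purely illustrative sketch (not the paper's algorithm), the snippet below behaviour-clones a student that sees only part of the state onto actions from an expert that sees the full state; the toy environment, the expert rule, and the network sizes are all assumptions made for the example.

```python
# Toy behavioral-cloning sketch: a partially observed student imitates a fully observed expert.
import torch
import torch.nn as nn

torch.manual_seed(0)

def expert_action(state):            # the expert acts on the full 2-D state
    return (state.sum(dim=-1, keepdim=True) > 0).float()

states = torch.randn(4096, 2)
observations = states[:, :1]         # the student only observes the first coordinate
actions = expert_action(states)

student = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(300):
    loss = bce(student(observations), actions)   # imitate expert actions from partial observations
    opt.zero_grad(); loss.backward(); opt.step()
```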
no code implementations • 25 Oct 2019 • Andreas Munk, Berend Zwartsenberg, Adam Ścibior, Atılım Güneş Baydin, Andrew Stewart, Goran Fernlund, Anoush Poursartip, Frank Wood
Our surrogates target stochastic simulators where the number of random variables itself can be stochastic and potentially unbounded.
1 code implementation • 20 Oct 2019 • Saeid Naderiparizi, Adam Ścibior, Andreas Munk, Mehrdad Ghadiri, Atılım Güneş Baydin, Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood
Naive approaches to amortized inference in probabilistic programs with unbounded loops can produce estimators with infinite variance.
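That failure mode can be reproduced in a few lines; the sketch below is a toy importance sampler over a loop length, not the paper's estimator. With target loop-length distribution p(n) = (1/2)^n and proposal q(n) = (3/4)(1/4)^(n-1), the weights are w(n) = (1/3)·2^n, so E_q[w] = 1 but E_q[w²] diverges and the empirical weight variance never stabilises.

```python
# Toy sketch of an infinite-variance importance weight over an unbounded loop
# (illustrative only; this is the failure mode the paper addresses, not its method).
import numpy as np

rng = np.random.default_rng(0)

def sample_loop_length_q(size):
    # proposal q(n) = (3/4)(1/4)^(n-1): geometric with success prob 3/4 on {1, 2, ...}
    return rng.geometric(0.75, size=size)

for num_samples in [10**3, 10**5, 10**7]:
    n = sample_loop_length_q(num_samples)
    w = (1.0 / 3.0) * 2.0 ** n                  # w(n) = p(n) / q(n)
    print(num_samples, w.mean(), w.var())       # mean hovers near 1; variance keeps growing
```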
no code implementations • 14 Nov 2018 • Eli Sennesh, Adam Ścibior, Hao Wu, Jan-Willem van de Meent
We assume that models are dynamic, but that model composition is static, in the sense that combinator application takes place prior to evaluating the model on data.
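A minimal sketch of what static composition means is given below; the `compose` combinator and the model fragments are hypothetical stand-ins, not the paper's combinator library. The point is only that the combinator is applied to build the joint model before any data is supplied, and the composite model is evaluated on data afterwards.

```python
# Toy sketch of static model composition (hypothetical combinator, not the paper's API).
import math
import random

def prior_model():
    # generative fragment: sample a latent and return it with its log probability
    z = random.gauss(0.0, 1.0)
    logp = -0.5 * (z ** 2 + math.log(2 * math.pi))
    return z, logp

def likelihood_model(z, x):
    # score an observation given the latent
    return -0.5 * ((x - z) ** 2 + math.log(2 * math.pi))

def compose(prior, likelihood):
    # combinator application: build the joint model statically, before seeing data
    def joint(x):
        z, logp_z = prior()
        return z, logp_z + likelihood(z, x)
    return joint

joint_model = compose(prior_model, likelihood_model)  # composition happens here
z, log_joint = joint_model(0.3)                       # evaluation on data happens afterwards
```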
no code implementations • NeurIPS 2016 • Carl-Johann Simon-Gabriel, Adam Ścibior, Ilya Tolstikhin, Bernhard Schölkopf
We provide a theoretical foundation for non-parametric estimation of functions of random variables using kernel mean embeddings.
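A toy illustration (not the paper's estimator) is sketched below: given samples of X, the empirical mean embedding of f(X) is formed by pushing those samples through f, and it agrees, as measured by MMD, with the embedding built from fresh samples of f(X); the RBF kernel and bandwidth are arbitrary choices for the example.

```python
# Toy sketch: kernel mean embedding of a function of a random variable (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, bandwidth=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))

def mmd2(a, b):
    # squared MMD between the empirical mean embeddings of two samples
    return rbf_kernel(a, a).mean() + rbf_kernel(b, b).mean() - 2 * rbf_kernel(a, b).mean()

f = np.sin
x = rng.normal(size=2000)
y_pushforward = f(x)                 # embedding of f(X) built by pushing samples of X through f
y_direct = f(rng.normal(size=2000))  # fresh samples of Y = f(X)

print(mmd2(y_pushforward, y_direct))               # close to zero: the two embeddings agree
print(mmd2(y_pushforward, rng.normal(size=2000)))  # larger: f(X) differs from X in distribution
```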