no code implementations • 27 Feb 2025 • Saeid Naderiparizi, Xiaoxuan Liang, Berend Zwartsenberg, Frank Wood
We develop a mechanism for combining multiple such constraints so that the resulting multiply-constrained model remains a manual bridge that respects all constraints.
no code implementations • 7 Jun 2024 • Jason Yoo, Yingchen He, Saeid Naderiparizi, Dylan Green, Gido M. van de Ven, Geoff Pleiss, Frank Wood
This work demonstrates that training autoregressive video diffusion models from a single, continuous video stream is not only possible but, remarkably, can also be competitive with standard offline training approaches given the same number of gradient steps.
no code implementations • 7 May 2024 • Jonathan Wilder Lavington, Ke Zhang, Vasileios Lioutas, Matthew Niedoba, Yunpeng Liu, Dylan Green, Saeid Naderiparizi, Xiaoxuan Liang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Moreover, because of the high variability among the problems posed by different autonomous systems, these simulators need to be easy to use and easy to modify.
no code implementations • 30 Apr 2024 • Dylan Green, William Harvey, Saeid Naderiparizi, Matthew Niedoba, Yunpeng Liu, Xiaoxuan Liang, Jonathan Lavington, Ke Zhang, Vasileios Lioutas, Setareh Dabiri, Adam Scibior, Berend Zwartsenberg, Frank Wood
Current state-of-the-art methods for video inpainting typically rely on optical flow or attention-based approaches to inpaint masked regions by propagating visual information across frames.
1 code implementation • 12 Feb 2024 • Matthew Niedoba, Dylan Green, Saeid Naderiparizi, Vasileios Lioutas, Jonathan Wilder Lavington, Xiaoxuan Liang, Yunpeng Liu, Ke Zhang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Score function estimation is the cornerstone of both training and sampling from diffusion generative models.
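A minimal statement of the standard denoising score-matching objective that such estimators target (notation ours, not the paper's): the network $s_\theta$ is trained to match the conditional score of the noising kernel, which in expectation recovers $\nabla_{x_t}\log p_t(x_t)$.

```latex
\mathcal{L}(\theta)
  = \mathbb{E}_{t,\, x_0,\, \epsilon}\!\left[
      \lambda(t)\,
      \bigl\| s_\theta(x_t, t) + \tfrac{\epsilon}{\sigma_t} \bigr\|^{2}
    \right],
\qquad
x_t = \alpha_t x_0 + \sigma_t \epsilon, \quad \epsilon \sim \mathcal{N}(0, I).
```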
no code implementations • 31 Jul 2023 • Saeid Naderiparizi, Xiaoxuan Liang, Berend Zwartsenberg, Frank Wood
The maximum likelihood principle advocates parameter estimation via optimization of the data likelihood function.
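For reference, the principle in symbols (our notation): given i.i.d. data $x_1, \dots, x_N$,

```latex
\hat{\theta}_{\mathrm{MLE}}
  = \arg\max_{\theta} \; \sum_{i=1}^{N} \log p_\theta(x_i).
```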
1 code implementation • 20 Dec 2022 • Sachin Mehta, Saeid Naderiparizi, Fartash Faghri, Maxwell Horton, Lailin Chen, Ali Farhadi, Oncel Tuzel, Mohammad Rastegari
To answer the open question on the importance of magnitude ranges for each augmentation operation, we introduce RangeAugment that allows us to efficiently learn the range of magnitudes for individual as well as composite augmentation operations.
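A minimal sketch of the core idea, with our own class name and a single brightness operation standing in for a general augmentation; this is an illustration of learnable magnitude ranges, not the paper's implementation or training objective.

```python
# Sketch (not RangeAugment's code): treat the magnitude range of an
# augmentation operation as learnable parameters and sample magnitudes
# from that range so the endpoints receive gradients end-to-end.
import torch
import torch.nn as nn


class LearnableRangeBrightness(nn.Module):
    """Brightness augmentation whose magnitude range [low, high] is learned."""

    def __init__(self, init_low=0.5, init_high=1.5):
        super().__init__()
        self.low = nn.Parameter(torch.tensor(float(init_low)))
        self.high = nn.Parameter(torch.tensor(float(init_high)))

    def forward(self, x):
        # Sample a magnitude uniformly from the current (learnable) range;
        # gradients flow to `low` and `high` through the affine reparameterization.
        u = torch.rand(x.shape[0], 1, 1, 1, device=x.device)
        magnitude = self.low + (self.high - self.low) * u
        return (x * magnitude).clamp(0.0, 1.0)


if __name__ == "__main__":
    aug = LearnableRangeBrightness()
    images = torch.rand(8, 3, 32, 32)
    out = aug(images)
    print(out.shape, aug.low.item(), aug.high.item())
```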
1 code implementation • 23 May 2022 • William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach, Frank Wood
We present a framework for video modeling based on denoising diffusion probabilistic models that produces long-duration video completions in a variety of realistic environments.
Ranked #3 on Rain Removal on Nightrain
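A schematic illustration of one way long-duration completions can be produced autoregressively, conditioning each new chunk of frames on a window of previously generated ones. The function names and the sliding-window scheme are our own simplification, and `sample_frames` is a placeholder for a frame-conditional diffusion sampler, not the paper's method.

```python
# Toy autoregressive video completion loop (illustration only).
import numpy as np


def sample_frames(conditioning, num_new, frame_shape=(64, 64, 3)):
    """Placeholder for a frame-conditional diffusion sampler.

    A real implementation would run reverse diffusion conditioned on
    `conditioning`; here we just return noise of the right shape.
    """
    return np.random.randn(num_new, *frame_shape)


def complete_video(observed_frames, total_length, window=4, chunk=4):
    frames = list(observed_frames)
    while len(frames) < total_length:
        # Condition on the most recent `window` frames and sample `chunk` more.
        context = np.stack(frames[-window:])
        new_frames = sample_frames(context, num_new=chunk)
        frames.extend(list(new_frames))
    return np.stack(frames[:total_length])


if __name__ == "__main__":
    seed_frames = np.random.randn(4, 64, 64, 3)  # an observed prefix
    video = complete_video(seed_frames, total_length=32)
    print(video.shape)  # (32, 64, 64, 3)
```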
1 code implementation • ICLR 2022 • William Harvey, Saeid Naderiparizi, Frank Wood
We present a conditional variational auto-encoder (VAE) which, to avoid the substantial cost of training from scratch, uses an architecture and training objective capable of leveraging a foundation model in the form of a pretrained unconditional VAE.
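A generic sketch of one way to leverage a pretrained unconditional VAE: freeze its decoder and train a new conditional encoder into the pretrained latent space. All class names, dimensions, and the loss are our own illustration; the paper's actual architecture and training objective differ.

```python
# Sketch: reuse a frozen "foundation" VAE decoder, train only a conditional encoder.
import torch
import torch.nn as nn

LATENT_DIM = 16


class PretrainedDecoder(nn.Module):
    """Stands in for the decoder of a pretrained unconditional VAE."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, 784))

    def forward(self, z):
        return self.net(z)


class ConditionalEncoder(nn.Module):
    """Maps a conditioning signal (e.g. a label embedding) to q(z | c)."""

    def __init__(self, cond_dim=10):
        super().__init__()
        self.net = nn.Linear(cond_dim, 2 * LATENT_DIM)

    def forward(self, c):
        mu, log_var = self.net(c).chunk(2, dim=-1)
        return mu, log_var


decoder = PretrainedDecoder()
for p in decoder.parameters():          # reuse the foundation model as-is
    p.requires_grad_(False)

encoder = ConditionalEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

c = torch.randn(32, 10)                 # dummy conditioning batch
x = torch.rand(32, 784)                 # dummy target batch
mu, log_var = encoder(c)
z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization trick
recon = decoder(z)
kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1).mean()
loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl
loss.backward()
opt.step()
```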
no code implementations • 8 Oct 2020 • Saeid Naderiparizi, Kenny Chiu, Benjamin Bloem-Reddy, Frank Wood
We intend this work to be a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large.
1 code implementation • 30 Mar 2020 • Frank Wood, Andrew Warrington, Saeid Naderiparizi, Christian Weilbach, Vaden Masrani, William Harvey, Adam Scibior, Boyan Beronov, John Grefenstette, Duncan Campbell, Ali Nasseri
In this work we demonstrate how to automate parts of the infectious disease-control policy-making process by performing inference in existing epidemiological models.
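A toy illustration of framing policy choice as inference, using a simple discrete-time SIR model and a rejection-style conditioning step. The model, threshold, and parameter names are ours for illustration; they are not the epidemiological models or inference engines used in the paper.

```python
# Toy "policy as inference": condition a simple SIR simulator on a desired
# outcome and infer which intervention strengths are consistent with it.
import numpy as np

rng = np.random.default_rng(0)


def sir_peak_infected(contact_reduction, beta=0.3, gamma=0.1, days=200, n=1_000_000, i0=10):
    """Simulate a discrete-time SIR model and return the peak infected count."""
    s, i, r = n - i0, i0, 0
    peak = i
    eff_beta = beta * (1.0 - contact_reduction)
    for _ in range(days):
        new_inf = eff_beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak


# Sample intervention strengths from a prior and keep those whose simulated
# peak stays under a hypothetical hospital-capacity threshold.
threshold = 50_000
prior_samples = rng.uniform(0.0, 0.8, size=5_000)
accepted = [c for c in prior_samples if sir_peak_infected(c) < threshold]
print(f"posterior mean contact reduction: {np.mean(accepted):.2f}")
```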
1 code implementation • 28 Mar 2020 • Andrew Warrington, Saeid Naderiparizi, Frank Wood
Deterministic models are approximations of reality that are easy to interpret and often easier to build than stochastic alternatives.
1 code implementation • 20 Oct 2019 • Saeid Naderiparizi, Adam Ścibior, Andreas Munk, Mehrdad Ghadiri, Atılım Güneş Baydin, Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood
Naive approaches to amortized inference in probabilistic programs with unbounded loops can produce estimators with infinite variance.
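A toy numerical illustration of the failure mode, not taken from the paper: when a loop continues with probability p under the model but probability q under the proposal, the importance weights' second moment diverges whenever p² > q, so the naive estimator has infinite variance.

```python
# Importance weighting over an unbounded (geometric-length) loop.
import numpy as np

rng = np.random.default_rng(0)
p, q = 0.9, 0.5   # continuation probabilities: model vs. proposal


def importance_weight(num_iters):
    # Density ratio of a geometric loop length under model vs. proposal.
    return (p ** (num_iters - 1) * (1 - p)) / (q ** (num_iters - 1) * (1 - q))


for n in [10**3, 10**4, 10**5, 10**6]:
    lengths = rng.geometric(1 - q, size=n)   # loop lengths drawn from the proposal
    weights = importance_weight(lengths)
    print(f"n={n:>7}  empirical weight variance = {weights.var():.3e}")
# The empirical variance tends to keep growing with n instead of stabilizing,
# the telltale sign of an infinite-variance estimator.
```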
no code implementations • Approximate Inference (AABI) Symposium 2019 • Andrew Warrington, Saeid Naderiparizi, Frank Wood
Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives.
no code implementations • Approximate Inference (AABI) Symposium 2019 • Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Saeid Naderiparizi, Adam Scibior, Andreas Munk, Frank Wood, Mehrdad Ghadiri, Philip Torr, Yee Whye Teh, Atilim Gunes Baydin, Tom Rainforth
We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly, such as rejection sampling loops.
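A minimal example, constructed by us for illustration, of the kind of nested stochastic sub-procedure being described: a simulator whose internal rejection-sampling loop makes the density of its output unavailable in closed form.

```python
# A simulator with a nested rejection-sampling loop (illustration only).
import numpy as np

rng = np.random.default_rng(0)


def black_box_criterion(x):
    """Stands in for an expensive internal stochastic test used to accept x."""
    return rng.normal(loc=np.sin(3 * x)) > 0.5


def rejection_loop():
    """Keep proposing until the black-box criterion accepts.

    Because acceptance depends on another stochastic procedure, the density
    of the accepted value is not directly computable, which is the setting
    where naive inference strategies struggle.
    """
    while True:
        x = rng.normal()
        if black_box_criterion(x):
            return x


def simulator(theta):
    noise = rejection_loop()   # nested sub-procedure without a tractable density
    return theta + noise       # observable output of the simulator


print(simulator(theta=2.0))
```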
3 code implementations • 8 Jul 2019 • Atılım Güneş Baydin, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, Saeid Naderiparizi, Bradley Gram-Hansen, Gilles Louppe, Mingfei Ma, Xiaohui Zhao, Philip Torr, Victor Lee, Kyle Cranmer, Prabhat, Frank Wood
Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models.
3 code implementations • NeurIPS 2019 • Atılım Güneş Baydin, Lukas Heinrich, Wahid Bhimji, Lei Shao, Saeid Naderiparizi, Andreas Munk, Jialin Liu, Bradley Gram-Hansen, Gilles Louppe, Lawrence Meadows, Philip Torr, Victor Lee, Prabhat, Kyle Cranmer, Frank Wood
We present a novel probabilistic programming framework that couples directly to existing large-scale simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way.
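A toy sketch of the underlying idea, written by us and not the actual cross-platform protocol: route every random draw a simulator makes through a controller, so an inference engine can record the trace or substitute its own proposed values. The class and address names are hypothetical.

```python
import random


class TraceController:
    """Records each address/value pair and can replay engine-controlled values."""

    def __init__(self, controlled=None):
        self.controlled = controlled or {}   # address -> value forced by the engine
        self.trace = {}                      # address -> value actually used

    def sample(self, address, sampler):
        value = self.controlled.get(address, sampler())
        self.trace[address] = value
        return value


def simulator(ctrl):
    # The simulator only ever draws randomness through the controller.
    energy = ctrl.sample("energy", lambda: random.gauss(10.0, 2.0))
    n_hits = ctrl.sample("n_hits", lambda: random.randint(1, 5))
    return energy * n_hits


# Forward run: record the trace.
ctrl = TraceController()
print(simulator(ctrl), ctrl.trace)

# Controlled run: the "inference engine" forces specific draws.
ctrl = TraceController(controlled={"energy": 12.0, "n_hits": 3})
print(simulator(ctrl), ctrl.trace)
```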