no code implementations • 7 May 2024 • Jonathan Wilder Lavington, Ke Zhang, Vasileios Lioutas, Matthew Niedoba, Yunpeng Liu, Dylan Green, Saeid Naderiparizi, Xiaoxuan Liang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Moreover, because of the high variability among the problems that arise across different autonomous systems, these simulators need to be easy to use and easy to modify.
1 code implementation • 12 Feb 2024 • Matthew Niedoba, Dylan Green, Saeid Naderiparizi, Vasileios Lioutas, Jonathan Wilder Lavington, Xiaoxuan Liang, Yunpeng Liu, Ke Zhang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Score function estimation is the cornerstone of both training and sampling from diffusion generative models.
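For context, score networks are commonly trained with denoising score matching; below is a minimal sketch of that standard objective for a Gaussian perturbation kernel. The `score_net` module and its `(x, sigma)` signature are hypothetical stand-ins, and this is the textbook objective the area builds on, not necessarily the estimator studied in this paper.

```python
import torch

def dsm_loss(score_net, x0, sigma):
    """Denoising score matching loss for a Gaussian perturbation kernel."""
    eps = torch.randn_like(x0)
    x = x0 + sigma * eps
    # The score of q(x | x0) = N(x; x0, sigma^2 I) is -(x - x0) / sigma^2.
    target = -(x - x0) / sigma**2
    # Regress the network's score estimate onto the kernel score.
    pred = score_net(x, sigma)
    return ((pred - target) ** 2).sum(dim=-1).mean()
```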
no code implementations • 3 Jul 2023 • Amrutha Varshini Ramesh, Aaron Mishkin, Mark Schmidt, Yihan Zhou, Jonathan Wilder Lavington, Jennifer She
We show that bound- and summation-constrained steepest descent in the $L_1$-norm guarantees more progress per iteration than previous rules and can be computed in only $O(n \log n)$ time.
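To illustrate the connection between the $L_1$-norm and two-coordinate updates: under a summation constraint, any feasible direction must sum to zero, and the steepest $L_1$ direction reduces to a greedy coordinate pair. The sketch below shows that basic rule only; it ignores the bound constraints and the sorting-based $O(n \log n)$ selection the paper analyzes.

```python
import numpy as np

def greedy_pair_step(x, grad, step):
    # Under sum(x) = c, a feasible direction must sum to zero. Steepest
    # descent in the L1 norm then picks the pair with the largest gradient
    # gap: decrease the coordinate with the largest gradient and increase
    # the one with the smallest gradient.
    i = int(np.argmax(grad))
    j = int(np.argmin(grad))
    x_new = x.copy()
    x_new[i] -= step
    x_new[j] += step
    return x_new
```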
1 code implementation • 24 May 2023 • Setareh Dabiri, Vasileios Lioutas, Berend Zwartsenberg, Yunpeng Liu, Matthew Niedoba, Xiaoxuan Liang, Dylan Green, Justice Sefas, Jonathan Wilder Lavington, Frank Wood, Adam Scibior
When training object detection models on synthetic data, it is important to make the distribution of synthetic data as close as possible to the distribution of real data.
no code implementations • 19 May 2023 • Yunpeng Liu, Vasileios Lioutas, Jonathan Wilder Lavington, Matthew Niedoba, Justice Sefas, Setareh Dabiri, Dylan Green, Xiaoxuan Liang, Berend Zwartsenberg, Adam Ścibior, Frank Wood
The development of algorithms that learn multi-agent behavioral models using human demonstrations has led to increasingly realistic simulations in the field of autonomous driving.
1 code implementation • 27 Apr 2023 • Frederik Kunstner, Jacques Chen, Jonathan Wilder Lavington, Mark Schmidt
This suggests that Adam outperforms SGD because it uses a more robust gradient estimate.
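For reference, a minimal sketch contrasting the two updates: Adam's per-coordinate second-moment normalization damps large, outlying gradient components, whereas SGD applies the raw gradient directly. Hyperparameter values are illustrative defaults, not settings from the paper.

```python
import numpy as np

def sgd_update(w, g, lr=0.1):
    return w - lr * g

def adam_update(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g**2     # second-moment estimate
    m_hat = m / (1 - b1**t)          # bias corrections
    v_hat = v / (1 - b2**t)
    # Per-coordinate normalization bounds the effective step size.
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```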
1 code implementation • 6 Feb 2023 • Jonathan Wilder Lavington, Sharan Vaswani, Reza Babanezhad, Mark Schmidt, Nicolas Le Roux
Our target optimization framework uses the (expensive) gradient computation to construct surrogate functions in a target space (e.g., the logits output by a linear model for classification) that can be minimized efficiently.
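As a rough illustration of the idea, a first-order surrogate built around the current targets can be minimized cheaply without recomputing the full gradient. The quadratic form and step size `eta` below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def make_target_surrogate(z0, grad_z0, eta):
    # First-order model of the loss around the current targets z0
    # (e.g. logits), regularized to stay close to z0; minimizing it in
    # target space reuses one expensive gradient many times.
    def surrogate(z):
        diff = z - z0
        return grad_z0 @ diff + (diff @ diff) / (2 * eta)
    return surrogate
```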
no code implementations • 9 Aug 2022 • Yunpeng Liu, Jonathan Wilder Lavington, Adam Scibior, Frank Wood
We develop a generic mechanism for generating vehicle-type specific sequences of waypoints from a probabilistic foundation model of driving behavior.
1 code implementation • 29 Jul 2022 • Jonathan Wilder Lavington, Sharan Vaswani, Mark Schmidt
Specifically, if the class of policies is sufficiently expressive to contain the expert policy, we prove that DAgger achieves constant regret.
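For orientation, the standard DAgger interaction loop that such regret analyses concern, as a minimal sketch; `env`, `expert`, `fit`, and `rollout` are hypothetical stand-ins for an environment, a queryable expert, a supervised learner, and a policy rollout routine.

```python
def dagger(env, expert, fit, rollout, n_iters=10):
    # Initialize by behavior cloning on expert-visited states.
    dataset = [(s, expert(s)) for s in rollout(env, expert)]
    policy = fit(dataset)
    for _ in range(n_iters):
        # Roll out the learner, then have the expert relabel visited states.
        dataset += [(s, expert(s)) for s in rollout(env, policy)]
        # Retrain on the aggregated dataset.
        policy = fit(dataset)
    return policy
```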
no code implementations • 17 Jun 2022 • Berend Zwartsenberg, Adam Ścibior, Matthew Niedoba, Vasileios Lioutas, Yunpeng Liu, Justice Sefas, Setareh Dabiri, Jonathan Wilder Lavington, Trevor Campbell, Frank Wood
We present a novel, conditional generative probabilistic model of set-valued data with a tractable log density.
no code implementations • 30 May 2022 • Vasileios Lioutas, Jonathan Wilder Lavington, Justice Sefas, Matthew Niedoba, Yunpeng Liu, Berend Zwartsenberg, Setareh Dabiri, Frank Wood, Adam Scibior
We introduce CriticSMC, a new algorithm for planning as inference, built from a composition of sequential Monte Carlo with learned soft-Q function heuristic factors.
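A minimal, hypothetical sketch of the general pattern: one SMC step in which a learned soft-Q function scores proposed actions as a heuristic factor before resampling. All names (`propose`, `soft_q`, `env_step`) are stand-ins, and this compresses planning-as-inference to its simplest form rather than reproducing the paper's algorithm.

```python
import numpy as np

def critic_smc_step(states, propose, soft_q, env_step, rng):
    # Propose one candidate action per particle, weight each pair by
    # exp(soft_q) so the critic acts as a heuristic factor, then resample.
    actions = [propose(s) for s in states]
    logw = np.array([soft_q(s, a) for s, a in zip(states, actions)])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(states), size=len(states), p=w)
    return [env_step(states[i], actions[i]) for i in idx]
```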