no code implementations • 28 Jun 2024 • Xinghua Lou, Meet Dave, Shrinu Kushagra, Miguel Lazaro-Gredilla, Kevin Murphy
The transformer baseline is based on the MTR model, which predicts multiple future trajectories conditioned on the past trajectories and static road layout features.
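For intuition, here is a minimal PyTorch sketch of a multi-modal trajectory predictor in this spirit; it is not the MTR implementation, and all module names, feature dimensions, and the number of modes are illustrative assumptions.

# Hypothetical sketch of an MTR-style multi-modal predictor: encode past
# trajectories plus static map points, decode K candidate future trajectories.
import torch
import torch.nn as nn

class MultiModalTrajectoryPredictor(nn.Module):
    def __init__(self, d_model=128, num_modes=6, horizon=80):
        super().__init__()
        self.traj_embed = nn.Linear(4, d_model)   # (x, y, vx, vy) per past step
        self.map_embed = nn.Linear(2, d_model)    # (x, y) per road polyline point
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        self.mode_queries = nn.Parameter(torch.randn(num_modes, d_model))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        self.traj_head = nn.Linear(d_model, horizon * 2)  # (x, y) per future step
        self.score_head = nn.Linear(d_model, 1)           # confidence per mode
        self.num_modes, self.horizon = num_modes, horizon

    def forward(self, past_traj, map_points):
        # past_traj: (B, T_past, 4); map_points: (B, M, 2)
        tokens = torch.cat([self.traj_embed(past_traj),
                            self.map_embed(map_points)], dim=1)
        memory = self.encoder(tokens)
        queries = self.mode_queries.unsqueeze(0).expand(past_traj.size(0), -1, -1)
        modes = self.decoder(queries, memory)
        trajs = self.traj_head(modes).view(-1, self.num_modes, self.horizon, 2)
        scores = self.score_head(modes).squeeze(-1)
        return trajs, scores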
no code implementations • 13 Mar 2023 • Miguel Lazaro-Gredilla, Ishan Deshpande, Sivaramakrishnan Swaminathan, Meet Dave, Dileep George
We consider the problem of recovering a latent graph where the observations at each node are aliased, and transitions are stochastic.
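A small numpy sketch of the problem setup (not the paper's recovery algorithm) follows: a stochastic walk over a latent graph whose nodes emit observations through a many-to-one (aliased) map, so different latent states are indistinguishable from a single observation. The graph, emission map, and sizes are illustrative assumptions.

# Illustrative setup only: a stochastic walk on a latent graph whose nodes
# share observation symbols (aliasing).
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_obs = 6, 3
# Row-stochastic transition matrix over latent nodes.
T = rng.dirichlet(np.ones(n_nodes), size=n_nodes)
# Aliased emission: several latent nodes map to the same observation symbol.
emission = np.array([0, 0, 1, 1, 2, 2])

def simulate(length=20, start=0):
    states, obs = [start], [emission[start]]
    for _ in range(length - 1):
        nxt = rng.choice(n_nodes, p=T[states[-1]])
        states.append(nxt)
        obs.append(emission[nxt])
    return np.array(states), np.array(obs)

states, obs = simulate()
# The learner only sees `obs`; the task is to recover the latent graph
# (the support of T) even though many states produce identical observations.
print(obs)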
no code implementations • 31 Jan 2023 • Antoine Dedieu, Guangyao Zhou, Dileep George, Miguel Lazaro-Gredilla
We evaluate both approaches on several benchmarks where VI is the state of the art and show that our method (a) achieves better test performance than Ji et al. (2020) for learning noisy-OR BNs with hierarchical latent structures on large sparse real datasets; (b) recovers a higher number of ground truth parameters than Buhai et al. (2020) from cluttered synthetic scenes; and (c) solves the 2D blind deconvolution problem from Lazaro-Gredilla et al. (2021) and variants of it, including binary matrix factorization, while VI catastrophically fails and is up to two orders of magnitude slower.
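For context, a tiny numpy sketch of the noisy-OR conditional that such networks are built from (the generic textbook form, not the paper's learning procedure); the weights and leak value are made up.

# Generic noisy-OR conditional:
# P(child = 1 | parents) = 1 - (1 - leak) * prod_i (1 - w_i)^{parent_i}
import numpy as np

def noisy_or_prob(parents, weights, leak=0.01):
    parents = np.asarray(parents, dtype=float)   # binary parent states
    weights = np.asarray(weights, dtype=float)   # per-parent activation probabilities
    return 1.0 - (1.0 - leak) * np.prod((1.0 - weights) ** parents)

# Example: two of three parents active.
print(noisy_or_prob([1, 0, 1], weights=[0.8, 0.5, 0.3]))  # ~0.861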
1 code implementation • 24 Jan 2023 • Ken Kansky, Skanda Vaidyanath, Scott Swingle, Xinghua Lou, Miguel Lazaro-Gredilla, Dileep George
We provide a benchmark of more than 200 PushWorld puzzles in PDDL and in an OpenAI Gym environment.
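A hedged usage sketch of interacting with a Gym-style environment follows; the environment id below is a placeholder, so check the PushWorld repository for the actual registered name and installation steps.

# Sketch only: "PushWorldPuzzle-v0" is a hypothetical id, not the real registration.
import gym

env = gym.make("PushWorldPuzzle-v0")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()           # random policy as a stand-in
    obs, reward, done, info = env.step(action)
env.close()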
1 code implementation • NeurIPS 2021 • Miguel Lazaro-Gredilla, Antoine Dedieu, Dileep George
Perturb-and-MAP offers an elegant approach to approximately sample from an energy-based model (EBM) by computing the maximum-a-posteriori (MAP) configuration of a perturbed version of the model.
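A small numpy illustration of the underlying idea, using the exact Gumbel-max trick on a fully enumerated discrete model rather than the structured perturbations studied in the paper: adding independent Gumbel noise to the unnormalized log-probabilities and taking the argmax yields exact samples; scalable perturb-and-MAP replaces full enumeration with a MAP solver under low-dimensional perturbations, which is only approximate. The energies below are arbitrary.

# Gumbel-max illustration of perturb-and-MAP on an enumerable discrete EBM.
import numpy as np

rng = np.random.default_rng(0)
energies = np.array([1.0, 0.2, 2.5, 0.7])           # E(x) for 4 configurations
logits = -energies                                   # unnormalized log P(x) = -E(x)

def perturb_and_map_sample(n=100_000):
    gumbel = rng.gumbel(size=(n, logits.size))       # i.i.d. Gumbel(0, 1) noise
    return np.argmax(logits + gumbel, axis=1)        # MAP of the perturbed model

samples = perturb_and_map_sample()
empirical = np.bincount(samples, minlength=4) / samples.size
exact = np.exp(logits) / np.exp(logits).sum()
print(np.round(empirical, 3), np.round(exact, 3))    # should closely agree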
no code implementations • AABI Symposium 2019 • Miguel Lazaro-Gredilla, Wolfgang Lehrach, Dileep George
We show that our approach generalizes to unseen probabilistic queries, even on unseen test data, providing fast and flexible inference.
no code implementations • 3 Aug 2018 • Dileep George, Alexander Lavin, J. Swaroop Guntupalli, David Mely, Nick Hay, Miguel Lazaro-Gredilla
Understanding the information processing roles of cortical circuits is an outstanding problem in neuroscience and artificial intelligence.
no code implementations • 5 Apr 2018 • Aditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, Stefano Ermon
Learning latent variable models with stochastic variational inference is challenging when the approximate posterior is far from the true posterior, owing to high variance in the gradient estimates.
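A small numpy demonstration of the variance issue in general (not the paper's remedy): estimating the gradient of E_{z ~ q}[f(z)] with the score-function (REINFORCE) estimator versus the reparameterization trick for a Gaussian q; the target f, the mean, and the sample sizes are arbitrary assumptions.

# Generic illustration of gradient-estimator variance in stochastic VI:
# score-function vs. reparameterized estimates of d/dmu E_{z ~ N(mu, 1)}[z**2],
# whose true value is 2*mu.
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 1000

def score_function_grads():
    z = rng.normal(mu, 1.0, size=n)
    return (z ** 2) * (z - mu)          # f(z) * d/dmu log N(z; mu, 1)

def reparam_grads():
    z = mu + rng.normal(size=n)
    return 2.0 * z                      # d/dmu f(mu + eps) with f(z) = z**2

print("true grad:", 2 * mu)
print("score-function var:", score_function_grads().var())
print("reparameterized var:", reparam_grads().var())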
no code implementations • NeurIPS 2013 • Michalis K. Titsias, Miguel Lazaro-Gredilla
We introduce a novel variational method that allows us to approximately integrate out kernel hyperparameters, such as length-scales, in Gaussian process regression.
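A crude numpy sketch of the underlying idea, averaging predictions over a grid of length-scales weighted by the marginal likelihood rather than using the paper's variational scheme; the kernel, grid, noise level, and data are all illustrative assumptions.

# Crude illustration of "integrating out" a GP length-scale by grid averaging
# weighted by the (unnormalized) marginal likelihood.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
Xs = np.linspace(0, 5, 100)[:, None]
noise = 0.01

def rbf(A, B, ell):
    return np.exp(-0.5 * (A - B.T) ** 2 / ell ** 2)   # 1-D squared-exponential

def log_marginal(ell):
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum()  # constant term omitted

def predictive_mean(ell):
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    return rbf(Xs, X, ell) @ np.linalg.solve(K, y)

ells = np.linspace(0.2, 3.0, 30)                       # grid over length-scales
logw = np.array([log_marginal(e) for e in ells])
w = np.exp(logw - logw.max()); w /= w.sum()
# Predictive mean with the length-scale averaged out over the grid.
mean_pred = sum(wi * predictive_mean(e) for wi, e in zip(w, ells))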