no code implementations • NeurIPS 2009 • Miguel Lázaro-Gredilla, Aníbal Figueiras-Vidal
The state-of-the-art sparse GP model introduced by Snelson and Ghahramani in [1] relies on finding a small, representative pseudo data set of m elements (drawn from the same domain as the n available data points) that explains the existing data well, and then uses it to perform inference.
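To make the pseudo-data idea concrete, the sketch below implements a generic subset-of-regressors-style low-rank GP predictor built on m pseudo-inputs in NumPy. This is a hedged illustration of the general approximation family, not the specific FITC/SPGP model of [1]; the kernel choice, lengthscale, noise level, and pseudo-input locations are arbitrary assumptions for the demo.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sparse_gp_predict(X, y, Xm, Xs, noise=1e-2):
    """Predictive mean at test points Xs using m pseudo-inputs Xm
    (subset-of-regressors style low-rank approximation, O(n m^2))."""
    Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(len(Xm))  # jitter for stability
    Kmn = rbf(Xm, X)
    Ksm = rbf(Xs, Xm)
    # w = (sigma^2 Kmm + Kmn Knm)^{-1} Kmn y, then mean = Ksm w
    A = noise * Kmm + Kmn @ Kmn.T
    w = np.linalg.solve(A, Kmn @ y)
    return Ksm @ w

# Demo: fit a noiseless sine with n=100 data points and only m=10 pseudo-inputs
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
Xm = np.linspace(0, 2 * np.pi, 10)[:, None]
Xs = np.linspace(0.5, 5.5, 20)[:, None]
mu = sparse_gp_predict(X, y, Xm, Xs)
```

Only the m×m system is solved at prediction time, which is the computational point of a pseudo data set: cost scales with m, not n.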
no code implementations • 16 Aug 2011 • Miguel Lázaro-Gredilla, Steven Van Vaerenbergh, Neil Lawrence
In this work we introduce a mixture of GPs to address the data association problem, i.e., to label a group of observations according to the sources that generated them.
no code implementations • NeurIPS 2011 • Michalis K. Titsias, Miguel Lázaro-Gredilla
We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models.
no code implementations • NeurIPS 2012 • Miguel Lázaro-Gredilla
The use of this nonlinear transformation, which is included as part of the probabilistic model, was shown to enhance performance by providing a better prior model on several data sets.
no code implementations • 12 Mar 2013 • Fernando Pérez-Cruz, Steven Van Vaerenbergh, Juan José Murillo-Fuentes, Miguel Lázaro-Gredilla, Ignacio Santamaria
Gaussian processes (GPs) are versatile tools that have been successfully employed to solve nonlinear estimation problems in machine learning, but that are rarely used in signal processing.
no code implementations • NeurIPS 2015 • Michalis K. Titsias (AUEB), Miguel Lázaro-Gredilla
This algorithm divides the problem of estimating stochastic gradients over multiple variational parameters into smaller sub-tasks, so that each sub-task intelligently explores the most relevant part of the variational distribution.
no code implementations • 7 Nov 2016 • Miguel Lázaro-Gredilla, Yi Liu, D. Scott Phoenix, Dileep George
We introduce the hierarchical compositional network (HCN), a directed generative model able to discover and disentangle, without supervision, the building blocks of a set of binary images.
2 code implementations • ICML 2017 • Ken Kansky, Tom Silver, David A. Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, Dileep George
The recent adaptation of deep neural network-based methods to reinforcement learning and planning domains has yielded remarkable progress on individual tasks.
no code implementations • 6 Dec 2018 • Miguel Lázaro-Gredilla, Dianhuan Lin, J. Swaroop Guntupalli, Dileep George
Humans can infer concepts from image pairs and apply those in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams.
no code implementations • 1 May 2019 • Antoine Dedieu, Nishad Gothoskar, Scott Swingle, Wolfgang Lehrach, Miguel Lázaro-Gredilla, Dileep George
We show that by constraining HMMs with a simple sparsity structure inspired by biology, we can make it learn variable order sequences efficiently.
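To illustrate what a sparsity-constrained HMM looks like, here is a minimal NumPy sketch: a transition matrix that permits only two successors per state, paired with the standard scaled forward algorithm for computing sequence likelihoods. This is a generic illustration of sparse transitions, not the specific biologically inspired structure the paper proposes; all sizes and parameters are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_obs = 8, 4
# Sparse transition matrix: each state may only stay put or advance to the
# next state (an illustrative sparsity pattern, not the paper's).
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, s] = 0.5
    T[s, (s + 1) % n_states] = 0.5
E = rng.dirichlet(np.ones(n_obs), size=n_states)  # emission probabilities
pi = np.full(n_states, 1.0 / n_states)            # uniform initial state

def log_likelihood(seq):
    """Scaled forward algorithm; returns log p(seq) under (pi, T, E)."""
    alpha = pi * E[:, seq[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in seq[1:]:
        alpha = (alpha @ T) * E[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

seq = rng.integers(0, n_obs, size=20)
ll = log_likelihood(seq)
```

The zeros in T never need to be touched during the forward recursion, which is what makes learning and inference with such constrained transition structures efficient.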
no code implementations • 10 Feb 2020 • Daniel P. Sawyer, Miguel Lázaro-Gredilla, Dileep George
The ability of humans to quickly identify general concepts from a handful of images has proven difficult to emulate with robots.
no code implementations • 10 Mar 2020 • Nishad Gothoskar, Miguel Lázaro-Gredilla, Abhishek Agarwal, Yasemin Bekiroglu, Dileep George
Our method can handle noise in the observed state and noise in the controllers that we interact with.
no code implementations • 11 Jun 2020 • Nishad Gothoskar, Miguel Lázaro-Gredilla, Dileep George
For an intelligent agent to operate flexibly and efficiently in complex environments, it must be able to reason at multiple levels of temporal, spatial, and conceptual abstraction.
1 code implementation • 11 Jun 2020 • Miguel Lázaro-Gredilla, Wolfgang Lehrach, Nishad Gothoskar, Guangyao Zhou, Antoine Dedieu, Dileep George
Here we introduce query training (QT), a mechanism to learn a PGM that is optimized for the approximate inference algorithm that will be paired with it.
1 code implementation • 3 Dec 2020 • Antoine Dedieu, Miguel Lázaro-Gredilla, Dileep George
We consider the problem of learning the underlying graph of a sparse Ising model with $p$ nodes from $n$ i.i.d. samples.
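For readers unfamiliar with this setting, the sketch below (a hedged illustration, not the paper's estimator) shows the two quantities that motivate it: the per-node conditional that pseudo-likelihood and neighborhood-selection methods fit with an $\ell_1$-penalized logistic regression, and the brute-force partition function whose $O(2^p)$ cost makes direct maximum likelihood intractable.

```python
import numpy as np
from itertools import product

def conditional(J, h, x, i):
    """p(x_i | x_{-i}) for x in {-1,+1}^p -- the local quantity that
    pseudo-likelihood / neighborhood-selection methods fit per node
    (assumes J symmetric with zero diagonal)."""
    a = J[i] @ x + h[i]
    return 1.0 / (1.0 + np.exp(-2.0 * x[i] * a))

def log_partition(J, h):
    """Brute-force log Z over all 2^p configurations; the exponential
    cost here is why full-likelihood learning is avoided in favour of
    the local conditionals above."""
    p = len(h)
    total = 0.0
    for cfg in product([-1.0, 1.0], repeat=p):
        x = np.array(cfg)
        total += np.exp(0.5 * x @ J @ x + h @ x)
    return np.log(total)
```

Each conditional depends on J only through row i, so estimating the graph decomposes into p independent sparse regressions, one per node.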
1 code implementation • 6 Dec 2021 • Guangyao Zhou, Wolfgang Lehrach, Antoine Dedieu, Miguel Lázaro-Gredilla, Dileep George
To demonstrate the ability of MAMs to capture CSIs at scale, we apply them to an important type of CSI present in a symbolic approach to recurrent computations in perceptual grouping.
2 code implementations • 8 Feb 2022 • Guangyao Zhou, Antoine Dedieu, Nishanth Kumar, Wolfgang Lehrach, Miguel Lázaro-Gredilla, Shrinu Kushagra, Dileep George
PGMax is an open-source Python package for (a) easily specifying discrete Probabilistic Graphical Models (PGMs) as factor graphs; and (b) automatically running efficient and scalable loopy belief propagation (LBP) in JAX.
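The snippet below is not PGMax's actual API (consult its documentation for the real interface of variable groups, factor graphs, and the JAX BP runner); it is a minimal NumPy sketch of the sum-product message passing that LBP engines automate, run on a three-variable chain where the updates are exact and can be checked against brute-force enumeration. The factor tables are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3                                # states per variable
phi01 = rng.random((K, K)) + 0.1     # pairwise factor between x0 and x1
phi12 = rng.random((K, K)) + 0.1     # pairwise factor between x1 and x2

def bp_marginal_x1():
    """Sum-product messages into x1 on the chain x0 - x1 - x2.
    On a tree this is exact; on loopy graphs the same updates are
    iterated to convergence, which is what an LBP engine scales up."""
    m0_to_1 = phi01.sum(axis=0)      # marginalize x0 out of phi01
    m2_to_1 = phi12.sum(axis=1)      # marginalize x2 out of phi12
    b = m0_to_1 * m2_to_1
    return b / b.sum()

def brute_marginal_x1():
    """Ground truth by enumerating the full joint."""
    joint = np.einsum('ab,bc->abc', phi01, phi12)
    b = joint.sum(axis=(0, 2))
    return b / b.sum()
```

The brute-force check costs $O(K^3)$ here but grows exponentially with the number of variables, while message passing stays local to factors, which is what makes LBP attractive at scale.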
no code implementations • 3 Dec 2022 • Rajkumar Vasudeva Raju, J. Swaroop Guntupalli, Guangyao Zhou, Miguel Lázaro-Gredilla, Dileep George
Fascinating and puzzling phenomena, such as landmark vector cells, splitter cells, and event-specific representations to name a few, are regularly discovered in the hippocampus.
1 code implementation • ICCV 2023 • Guangyao Zhou, Nishad Gothoskar, Lirui Wang, Joshua B. Tenenbaum, Dan Gutfreund, Miguel Lázaro-Gredilla, Dileep George, Vikash K. Mansinghka
In this paper, we introduce probabilistic modeling to the inverse graphics framework to quantify uncertainty and achieve robustness in 6D pose estimation tasks.
Ranked #1 on YCB-Video
no code implementations • 14 Feb 2023 • J. Swaroop Guntupalli, Rajkumar Vasudeva Raju, Shrinu Kushagra, Carter Wendelken, Danny Sawyer, Ishan Deshpande, Guangyao Zhou, Miguel Lázaro-Gredilla, Dileep George
Graph schemas can be learned in far fewer episodes than previous baselines, and can model and plan in a few steps in novel variations of these tasks.
no code implementations • 11 Jan 2024 • Antoine Dedieu, Wolfgang Lehrach, Guangyao Zhou, Dileep George, Miguel Lázaro-Gredilla
Despite their stellar performance on a wide range of tasks, including in-context tasks only revealed during inference, vanilla transformers and variants trained for next-token prediction (a) do not learn an explicit world model of their environment that can be flexibly queried, and (b) cannot be used for planning or navigation.