Search Results for author: Miguel Lázaro-Gredilla

Found 17 papers, 5 papers with code

PGMax: Factor Graphs for Discrete Probabilistic Graphical Models and Loopy Belief Propagation in JAX

1 code implementation · 8 Feb 2022 · Guangyao Zhou, Nishanth Kumar, Antoine Dedieu, Miguel Lázaro-Gredilla, Shrinu Kushagra, Dileep George

PGMax is an open-source Python package for easy specification of discrete probabilistic graphical models (PGMs) as factor graphs, with automatic derivation of efficient and scalable loopy belief propagation (LBP) implementations in JAX.
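PGMax compiles and runs these updates automatically; purely as an illustration of the sum-product computation underneath (plain NumPy, not the PGMax API), a minimal loopy-BP loop on a small binary chain might look like this:

```python
import numpy as np

# Minimal sum-product loopy BP on a binary chain x0 - x1 - x2.
# phi[i] are unary potentials; psi[(a, b)] are pairwise potentials
# indexed as psi[(a, b)][x_a, x_b].
rng = np.random.default_rng(0)
phi = rng.uniform(0.5, 1.5, size=(3, 2))
edges = [(0, 1), (1, 2)]
psi = {e: rng.uniform(0.5, 1.5, size=(2, 2)) for e in edges}

# One message per direction of each edge, initialized uniform.
m = {(i, j): np.ones(2) for a, b in edges for (i, j) in [(a, b), (b, a)]}

for _ in range(10):  # parallel ("flooding") updates; exact on a tree
    new = {}
    for a, b in edges:
        for i, j in [(a, b), (b, a)]:
            P = psi[(a, b)] if i == a else psi[(a, b)].T  # P[x_i, x_j]
            incoming = phi[i].copy()
            for (s, t), msg in m.items():
                if t == i and s != j:  # all messages into i except from j
                    incoming = incoming * msg
            out = P.T @ incoming       # marginalize over x_i
            new[(i, j)] = out / out.sum()
    m = new

# Beliefs (approximate marginals): unary potential times incoming messages.
beliefs = []
for i in range(3):
    b = phi[i].copy()
    for (s, t), msg in m.items():
        if t == i:
            b = b * msg
    beliefs.append(b / b.sum())
```

On this chain (a tree) the beliefs match the exact marginals; on loopy graphs the same updates give the approximation that PGMax scales up in JAX.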

Graphical Models with Attention for Context-Specific Independence and an Application to Perceptual Grouping

1 code implementation · 6 Dec 2021 · Guangyao Zhou, Wolfgang Lehrach, Antoine Dedieu, Miguel Lázaro-Gredilla, Dileep George

To demonstrate that Markov attention models (MAMs) can capture context-specific independencies (CSIs) at scale, we apply them to an important type of CSI that arises in a symbolic approach to recurrent computations in perceptual grouping.

Sample-Efficient L0-L2 Constrained Structure Learning of Sparse Ising Models

1 code implementation · 3 Dec 2020 · Antoine Dedieu, Miguel Lázaro-Gredilla, Dileep George

We consider the problem of learning the underlying graph of a sparse Ising model with $p$ nodes from $n$ i.i.d. samples.
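The classic baseline for this problem is neighborhood selection: regress each node's spin on all the others with a sparsity-inducing penalty. The sketch below uses the simpler l1-penalized logistic regression (Ravikumar et al.-style) as a stand-in for the paper's L0-L2 constrained estimator; `fit_neighborhood` and the toy data are illustrative, not from the paper.

```python
import numpy as np

# Neighborhood selection for a sparse Ising model: regress each node's spin
# on all other spins with l1-penalized logistic regression via proximal
# gradient (ISTA). The l1 penalty is a stand-in for the paper's L0-L2
# constraint.
def fit_neighborhood(X, node, lam=0.05, lr=0.1, iters=500):
    """X: (n, p) array of +/-1 spins; returns coupling estimates to `node`."""
    y = X[:, node]
    Z = np.delete(X, node, axis=1)
    w = np.zeros(Z.shape[1])
    for _ in range(iters):
        margin = y * (Z @ w)
        sig = 1.0 / (1.0 + np.exp(margin))             # sigmoid(-margin)
        grad = -(Z * (y * sig)[:, None]).mean(axis=0)  # logistic-loss gradient
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Toy data: 4 spins; node 0 agrees with node 1 90% of the time, the other
# two spins are independent noise, so only the (0, 1) coupling should survive.
rng = np.random.default_rng(1)
n = 2000
x1 = rng.choice([-1.0, 1.0], size=n)
x0 = np.where(rng.random(n) < 0.9, x1, -x1)
X = np.column_stack([x0, x1,
                     rng.choice([-1.0, 1.0], size=n),
                     rng.choice([-1.0, 1.0], size=n)])
w = fit_neighborhood(X, node=0)   # w[0] is the weight on x1
```

The soft-threshold step zeroes out the spurious couplings while the true one survives; the paper's contribution is achieving this recovery with provably fewer samples under the L0-L2 constraint.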

Query Training: Learning a Worse Model to Infer Better Marginals in Undirected Graphical Models with Hidden Variables

1 code implementation · 11 Jun 2020 · Miguel Lázaro-Gredilla, Wolfgang Lehrach, Nishad Gothoskar, Guangyao Zhou, Antoine Dedieu, Dileep George

Here we introduce query training (QT), a mechanism to learn a PGM that is optimized for the approximate inference algorithm that will be paired with it.

From proprioception to long-horizon planning in novel environments: A hierarchical RL model

no code implementations · 11 Jun 2020 · Nishad Gothoskar, Miguel Lázaro-Gredilla, Dileep George

For an intelligent agent to operate flexibly and efficiently in complex environments, it must be able to reason at multiple levels of temporal, spatial, and conceptual abstraction.

Efficient Exploration

Learning a generative model for robot control using visual feedback

no code implementations · 10 Mar 2020 · Nishad Gothoskar, Miguel Lázaro-Gredilla, Abhishek Agarwal, Yasemin Bekiroglu, Dileep George

Our method can handle noise in the observed state and noise in the controllers that we interact with.

A Model of Fast Concept Inference with Object-Factorized Cognitive Programs

no code implementations · 10 Feb 2020 · Daniel P. Sawyer, Miguel Lázaro-Gredilla, Dileep George

The ability of humans to quickly identify general concepts from a handful of images has proven difficult to emulate with robots.

Learning higher-order sequential structure with cloned HMMs

no code implementations · 1 May 2019 · Antoine Dedieu, Nishad Gothoskar, Scott Swingle, Wolfgang Lehrach, Miguel Lázaro-Gredilla, Dileep George

We show that by constraining HMMs with a simple sparsity structure inspired by biology, we can make them learn variable-order sequences efficiently.

Community Detection · Language Modelling
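The sparsity structure in question can be sketched directly: each observed symbol owns a fixed block of hidden "clones", emissions are deterministic, and all context lives in the transition matrix. A minimal illustrative forward pass (names and sizes are ours, not the paper's):

```python
import numpy as np

# Cloned HMM sketch: each observed symbol owns a fixed block of hidden
# "clones", and a clone can only emit its own symbol, so the emission
# matrix is deterministic and all context is carried by the transitions.
n_symbols, n_clones = 3, 2
n_states = n_symbols * n_clones
state_to_symbol = np.repeat(np.arange(n_symbols), n_clones)  # [0 0 1 1 2 2]

rng = np.random.default_rng(0)
T = rng.random((n_states, n_states))
T = T / T.sum(axis=1, keepdims=True)       # row-stochastic transitions
pi = np.full(n_states, 1.0 / n_states)     # uniform initial distribution

def log_likelihood(seq):
    """Forward algorithm: observing symbol s zeroes out all non-s clones."""
    alpha = pi * (state_to_symbol == seq[0])
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for s in seq[1:]:
        alpha = (alpha @ T) * (state_to_symbol == s)
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll
```

Because different clones of the same symbol can carry different histories, the model captures higher-order dependencies that a first-order HMM over raw symbols cannot, while the masking keeps every forward step cheap.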

Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs

no code implementations · 6 Dec 2018 · Miguel Lázaro-Gredilla, Dianhuan Lin, J. Swaroop Guntupalli, Dileep George

Humans can infer concepts from image pairs and apply those in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams.

Hierarchical compositional feature learning

no code implementations · 7 Nov 2016 · Miguel Lázaro-Gredilla, Yi Liu, D. Scott Phoenix, Dileep George

We introduce the hierarchical compositional network (HCN), a directed generative model able to discover and disentangle, without supervision, the building blocks of a set of binary images.

Local Expectation Gradients for Black Box Variational Inference

no code implementations · NeurIPS 2015 · Michalis K. Titsias (RC AUEB), Miguel Lázaro-Gredilla

This algorithm divides the problem of estimating the stochastic gradients over multiple variational parameters into smaller sub-tasks so that each sub-task explores intelligently the most relevant part of the variational distribution.

Variational Inference
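The key idea, as we understand it, is that for a factorized discrete variational distribution the expectation over each single coordinate can be taken exactly, leaving only the remaining coordinates to be sampled. A hedged sketch for a mean-field Bernoulli q (the function `f` and all names are our toy example, not from the paper):

```python
import numpy as np

# Local expectation gradients (sketch). For a mean-field Bernoulli q with
# parameters theta, d/d theta_i of E_q[f(z)] equals
#   E_{q(z_-i)}[ f(z with z_i = 1) - f(z with z_i = 0) ]:
# the expectation over z_i is done exactly (the "local expectation"), and
# only the remaining coordinates z_-i are sampled.
rng = np.random.default_rng(0)

def local_expectation_grad(theta, f, n_samples=4000):
    d = len(theta)
    grad = np.zeros(d)
    for _ in range(n_samples):
        z = (rng.random(d) < theta).astype(float)  # sample z ~ q
        for i in range(d):
            z1, z0 = z.copy(), z.copy()
            z1[i], z0[i] = 1.0, 0.0
            grad[i] += f(z1) - f(z0)               # exact over z_i
    return grad / n_samples

# Example: f(z) = 2 z0 + z0 z1 - 0.5 z1, so E_q[f] = 2 t0 + t0 t1 - 0.5 t1
# and the exact gradient is (2 + t1, t0 - 0.5).
theta = np.array([0.3, 0.7])
f = lambda z: 2.0 * z[0] + z[0] * z[1] - 0.5 * z[1]
g = local_expectation_grad(theta, f)   # close to (2.7, -0.2)
```

Handling each coordinate's expectation exactly is what lets each sub-task "explore the most relevant part of the variational distribution", and it typically gives much lower variance than a plain score-function estimator.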

Gaussian Processes for Nonlinear Signal Processing

no code implementations · 12 Mar 2013 · Fernando Pérez-Cruz, Steven Van Vaerenbergh, Juan José Murillo-Fuentes, Miguel Lázaro-Gredilla, Ignacio Santamaria

Gaussian processes (GPs) are versatile tools that have been successfully employed to solve nonlinear estimation problems in machine learning, but that are rarely used in signal processing.

Gaussian Processes · General Classification
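The nonlinear estimator underlying this line of work is plain GP regression. A minimal self-contained sketch (RBF kernel; all names and hyperparameters here are illustrative, not from the paper):

```python
import numpy as np

# Minimal GP regression sketch with an RBF kernel.
def rbf(A, B, ell=0.2, sf=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, Xs, noise=0.1):
    """Posterior mean and covariance at test inputs Xs."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    Ks = rbf(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mu, cov = gp_predict(X, y, np.array([[0.25]]))  # mu near sin(pi/2) = 1
```

The predictive covariance is what a point estimator like a kernel adaptive filter lacks, and it is the property the paper argues signal-processing applications should exploit.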

Bayesian Warped Gaussian Processes

no code implementations · NeurIPS 2012 · Miguel Lázaro-Gredilla

The use of this nonlinear transformation, which is included as part of the probabilistic model, was shown to enhance performance by providing a better prior model on several data sets.

Gaussian Processes · General Classification

Overlapping Mixtures of Gaussian Processes for the Data Association Problem

no code implementations · 16 Aug 2011 · Miguel Lázaro-Gredilla, Steven Van Vaerenbergh, Neil Lawrence

In this work we introduce a mixture of GPs to address the data association problem, i.e., to label a group of observations according to the sources that generated them.

Gaussian Processes · Multi-Object Tracking

Inter-domain Gaussian Processes for Sparse Inference using Inducing Features

no code implementations · NeurIPS 2009 · Miguel Lázaro-Gredilla, Aníbal Figueiras-Vidal

The state-of-the-art sparse GP model introduced by Snelson and Ghahramani in [1] relies on finding a small, representative pseudo data set of $m$ elements (from the same domain as the $n$ available data elements) that can explain the existing data well, and then uses it to perform inference.

Gaussian Processes · Model Selection
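The computational payoff of the pseudo-data idea is that prediction only needs the $m \times m$ and $m \times n$ kernel blocks instead of the full $n \times n$ matrix. The sketch below shows the simple "subset of regressors" flavor of this, not the paper's inter-domain construction; all names and settings are illustrative.

```python
import numpy as np

# Sparse GP sketch: with m inducing (pseudo) inputs Z, the predictive mean
# is obtained by solving an m x m system built from the m x n kernel block,
# never forming the n x n training kernel. Subset-of-regressors flavor.
def k_rbf(A, B, ell=0.2):
    return np.exp(-0.5 * ((A[:, None, 0] - B[None, :, 0]) / ell) ** 2)

def sparse_gp_mean(X, y, Z, Xs, noise=0.1):
    Kzz = k_rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for stability
    Kzx = k_rbf(Z, X)
    A = Kzx @ Kzx.T + noise**2 * Kzz            # m x m system, not n x n
    return k_rbf(Xs, Z) @ np.linalg.solve(A, Kzx @ y)

X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
Z = np.linspace(0.0, 1.0, 10)[:, None]          # m = 10 pseudo inputs
mu = sparse_gp_mean(X, y, Z, X)                 # tracks the sine closely
```

The paper's inter-domain extension generalizes the inducing points to inducing *features* living in a different domain (e.g. frequency), which can summarize the data more efficiently than same-domain pseudo inputs.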
