Search Results for author: Dileep George

Found 27 papers, 8 papers with code

What type of inference is planning?

no code implementations25 Jun 2024 Miguel Lázaro-Gredilla, Li Yang Ku, Kevin P. Murphy, Dileep George

Multiple types of inference are available for probabilistic graphical models, e.g., marginal, maximum-a-posteriori, and even marginal maximum-a-posteriori.
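The distinction can be made concrete on a toy joint distribution, hand-picked so that marginal and MAP inference give different answers (an illustrative sketch, not code from the paper):

```python
import numpy as np

# Toy joint distribution P(x, y) over two binary variables,
# chosen so that marginal and MAP inference disagree.
P = np.array([[0.3, 0.3],   # x = 0
              [0.4, 0.0]])  # x = 1

# Marginal inference: P(x) = sum_y P(x, y)
marg_x = P.sum(axis=1)      # [0.6, 0.4] -> argmax is x = 0

# MAP inference: the single most probable joint configuration
map_xy = np.unravel_index(P.argmax(), P.shape)  # (1, 0)

print(marg_x.argmax())  # 0: most probable x under the marginal
print(map_xy[0])        # 1: x in the most probable joint state
```

Here the most probable joint state has x = 1, even though x = 0 is more probable once y is summed out, which is why the choice of inference type matters.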

Variational Inference

Learning Cognitive Maps from Transformer Representations for Efficient Planning in Partially Observed Environments

no code implementations11 Jan 2024 Antoine Dedieu, Wolfgang Lehrach, Guangyao Zhou, Dileep George, Miguel Lázaro-Gredilla

Despite their stellar performance on a wide range of tasks, including in-context tasks only revealed during inference, vanilla transformers and variants trained for next-token prediction (a) do not learn an explicit world model of their environment which can be flexibly queried and (b) cannot be used for planning or navigation.

Neuroscience needs Network Science

no code implementations10 May 2023 Dániel L Barabási, Ginestra Bianconi, Ed Bullmore, Mark Burgess, SueYeon Chung, Tina Eliassi-Rad, Dileep George, István A. Kovács, Hernán Makse, Christos Papadimitriou, Thomas E. Nichols, Olaf Sporns, Kim Stachenfeld, Zoltán Toroczkai, Emma K. Towlson, Anthony M Zador, Hongkui Zeng, Albert-László Barabási, Amy Bernard, György Buzsáki

We explore the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease, and discuss the potential for collaboration between network science and neuroscience communities.

Fast exploration and learning of latent graphs with aliased observations

no code implementations13 Mar 2023 Miguel Lazaro-Gredilla, Ishan Deshpande, Sivaramakrishnan Swaminathan, Meet Dave, Dileep George

We consider the problem of recovering a latent graph where the observations at each node are \emph{aliased}, and transitions are stochastic.

Efficient Exploration

Learning noisy-OR Bayesian Networks with Max-Product Belief Propagation

no code implementations31 Jan 2023 Antoine Dedieu, Guangyao Zhou, Dileep George, Miguel Lazaro-Gredilla

We evaluate both approaches on several benchmarks where VI is the state-of-the-art and show that our method (a) achieves better test performance than Ji et al. (2020) for learning noisy-OR BNs with hierarchical latent structures on large sparse real datasets; (b) recovers a higher number of ground truth parameters than Buhai et al. (2020) from cluttered synthetic scenes; and (c) solves the 2D blind deconvolution problem from Lazaro-Gredilla et al. (2021) and variants (including binary matrix factorization) while VI catastrophically fails and is up to two orders of magnitude slower.
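For reference, the noisy-OR conditional itself is only a few lines: each active parent independently fails to activate the child, with a leak term for spontaneous activation (a generic sketch of the CPD, not the paper's learning code):

```python
import numpy as np

def noisy_or_prob(weights, parents, leak=0.01):
    """P(child = 1 | parent states) under a noisy-OR CPD.

    Active parent i fails to activate the child with probability
    (1 - weights[i]); `leak` lets the child activate spontaneously.
    """
    fail = (1.0 - leak) * np.prod((1.0 - weights) ** parents)
    return 1.0 - fail

w = np.array([0.9, 0.7])
print(noisy_or_prob(w, np.array([0, 0])))  # 0.01: leak only
print(noisy_or_prob(w, np.array([1, 1])))  # 1 - 0.99*0.1*0.3 = 0.9703
```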

Variational Inference

Space is a latent sequence: Structured sequence learning as a unified theory of representation in the hippocampus

no code implementations3 Dec 2022 Rajkumar Vasudeva Raju, J. Swaroop Guntupalli, Guangyao Zhou, Miguel Lázaro-Gredilla, Dileep George

Fascinating and puzzling phenomena, such as landmark vector cells, splitter cells, and event-specific representations to name a few, are regularly discovered in the hippocampus.


PGMax: Factor Graphs for Discrete Probabilistic Graphical Models and Loopy Belief Propagation in JAX

2 code implementations8 Feb 2022 Guangyao Zhou, Antoine Dedieu, Nishanth Kumar, Wolfgang Lehrach, Miguel Lázaro-Gredilla, Shrinu Kushagra, Dileep George

PGMax is an open-source Python package for (a) easily specifying discrete Probabilistic Graphical Models (PGMs) as factor graphs; and (b) automatically running efficient and scalable loopy belief propagation (LBP) in JAX.
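For intuition about what LBP computes (this is a minimal NumPy sketch of the message-passing computation, not the PGMax API): on a tree-structured factor graph, belief propagation is exact, e.g. on a three-variable chain x0 - x1 - x2 of binary variables:

```python
import numpy as np

phi = np.array([[1.0, 0.2],
                [0.2, 1.0]])        # shared pairwise potential, favors agreement
unary = np.array([[0.7, 0.3],       # unary potentials for x0, x1, x2
                  [0.5, 0.5],
                  [0.4, 0.6]])

# Sum-product messages along the chain.
fwd = unary[0] @ phi                # message x0 -> x1
bwd = phi @ unary[2]                # message x2 -> x1

# Belief at x1 combines its unary term with both incoming messages.
belief1 = unary[1] * fwd * bwd
belief1 /= belief1.sum()

# Brute-force marginal of x1 for comparison.
joint = np.einsum('i,j,k,ij,jk->ijk', unary[0], unary[1], unary[2], phi, phi)
exact1 = joint.sum(axis=(0, 2))
exact1 /= exact1.sum()
print(np.allclose(belief1, exact1))  # True: BP is exact on a chain
```

On loopy graphs the same message updates are iterated and the resulting beliefs are approximate; scaling those updates efficiently on accelerators is what the JAX backend provides.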

Graphical Models with Attention for Context-Specific Independence and an Application to Perceptual Grouping

1 code implementation6 Dec 2021 Guangyao Zhou, Wolfgang Lehrach, Antoine Dedieu, Miguel Lázaro-Gredilla, Dileep George

To demonstrate MAM's capabilities to capture CSIs at scale, we apply MAMs to capture an important type of CSI that is present in a symbolic approach to recurrent computations in perceptual grouping.

Perturb-and-max-product: Sampling and learning in discrete energy-based models

1 code implementation NeurIPS 2021 Miguel Lazaro-Gredilla, Antoine Dedieu, Dileep George

Perturb-and-MAP offers an elegant approach to approximately sample from an energy-based model (EBM) by computing the maximum-a-posteriori (MAP) configuration of a perturbed version of the model.
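The underlying Gumbel-max trick can be sketched on a tiny model where the MAP step is brute-force enumeration (illustrative only; the paper's contribution is replacing this exact MAP with max-product at scale):

```python
import numpy as np

rng = np.random.default_rng(0)

# Energies of a tiny discrete EBM over 4 joint configurations.
E = np.array([1.0, 0.5, 2.0, 0.1])
p_exact = np.exp(-E) / np.exp(-E).sum()

# Full Gumbel perturbation: adding i.i.d. Gumbel noise to each
# configuration's negative energy and taking the argmax yields an
# exact sample from the Gibbs distribution (Gumbel-max trick).
def sample():
    g = rng.gumbel(size=E.shape)
    return np.argmax(-E + g)

draws = np.array([sample() for _ in range(20000)])
freq = np.bincount(draws, minlength=4) / draws.size
print(np.abs(freq - p_exact).max())  # empirical frequencies match p_exact
```

Perturbing every joint configuration is intractable in general, which is why practical schemes perturb low-order potentials and solve the perturbed MAP problem approximately.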

Sample-Efficient L0-L2 Constrained Structure Learning of Sparse Ising Models

1 code implementation3 Dec 2020 Antoine Dedieu, Miguel Lázaro-Gredilla, Dileep George

We consider the problem of learning the underlying graph of a sparse Ising model with $p$ nodes from $n$ i.i.d. samples.
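A sketch of the classical node-wise (pseudo-likelihood) regression approach that such structure-learning methods build on, not the paper's L0-L2 constrained estimator: regress each spin on the others; large weights indicate edges of the graph.

```python
import numpy as np

rng = np.random.default_rng(0)

J01 = 0.8  # the only true coupling, between spins 0 and 1 (3 spins total)

# Exact distribution over the 8 configurations of three +-1 spins.
configs = np.array([[int(b) * 2 - 1 for b in f"{i:03b}"] for i in range(8)])
energy = -J01 * configs[:, 0] * configs[:, 1]
p = np.exp(-energy)
p /= p.sum()

samples = configs[rng.choice(8, size=5000, p=p)]
y = (samples[:, 0] + 1) // 2          # spin 0 as a 0/1 label
X = samples[:, 1:]                    # the other spins as features

# Logistic regression by gradient descent; the Ising conditional is
# P(s0 = +1 | rest) = sigmoid(2 * sum_j J_0j s_j), so the recovered
# weight on spin 1 should be near 2 * J01 = 1.6 and near 0 on spin 2.
w = np.zeros(2)
for _ in range(2000):
    grad = X.T @ (1 / (1 + np.exp(-X @ w)) - y) / len(y)
    w -= 0.5 * grad
print(w)
```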

Query Training: Learning a Worse Model to Infer Better Marginals in Undirected Graphical Models with Hidden Variables

1 code implementation11 Jun 2020 Miguel Lázaro-Gredilla, Wolfgang Lehrach, Nishad Gothoskar, Guangyao Zhou, Antoine Dedieu, Dileep George

Here we introduce query training (QT), a mechanism to learn a PGM that is optimized for the approximate inference algorithm that will be paired with it.

From proprioception to long-horizon planning in novel environments: A hierarchical RL model

no code implementations11 Jun 2020 Nishad Gothoskar, Miguel Lázaro-Gredilla, Dileep George

For an intelligent agent to operate flexibly and efficiently in complex environments, it must be able to reason at multiple levels of temporal, spatial, and conceptual abstraction.

Efficient Exploration, Model Predictive Control

Learning a generative model for robot control using visual feedback

no code implementations10 Mar 2020 Nishad Gothoskar, Miguel Lázaro-Gredilla, Abhishek Agarwal, Yasemin Bekiroglu, Dileep George

Our method can handle noise in the observed state and noise in the controllers that we interact with.

A Model of Fast Concept Inference with Object-Factorized Cognitive Programs

no code implementations10 Feb 2020 Daniel P. Sawyer, Miguel Lázaro-Gredilla, Dileep George

The ability of humans to quickly identify general concepts from a handful of images has proven difficult to emulate with robots.

Learning undirected models via query training

no code implementations AABI Symposium 2019 Miguel Lazaro-Gredilla, Wolfgang Lehrach, Dileep George

We show that our approach generalizes to unseen probabilistic queries on also unseen test data, providing fast and flexible inference.


What can the brain teach us about building artificial intelligence?

no code implementations4 Sep 2019 Dileep George

This paper is the preprint of an invited commentary on Lake et al's Behavioral and Brain Sciences article titled "Building machines that learn and think like people".

Learning higher-order sequential structure with cloned HMMs

no code implementations1 May 2019 Antoine Dedieu, Nishad Gothoskar, Scott Swingle, Wolfgang Lehrach, Miguel Lázaro-Gredilla, Dileep George

We show that by constraining HMMs with a simple sparsity structure inspired by biology, we can make them learn variable-order sequences efficiently.
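The idea can be illustrated with a toy cloned HMM with hand-set parameters (for illustration only, not the paper's learned model): hidden states are "clones" tied deterministically to observations, so which clone is active encodes the context in which a symbol was seen.

```python
import numpy as np

# Clones x1 and x2 both emit 'x'; 'b' and 'c' have one state each.
obs_of = {'b': 'b', 'c': 'c', 'x1': 'x', 'x2': 'x'}
states = list(obs_of)                            # ['b', 'c', 'x1', 'x2']
T = np.zeros((4, 4))                             # deterministic transitions
T[states.index('b'), states.index('x1')] = 1.0   # b  -> x1
T[states.index('c'), states.index('x2')] = 1.0   # c  -> x2
T[states.index('x1'), states.index('c')] = 1.0   # x1 -> c
T[states.index('x2'), states.index('b')] = 1.0   # x2 -> b

def predict_next(seq):
    """Forward-filter an observation sequence, return P(next observation)."""
    alpha = np.ones(4) / 4
    for o in seq:
        alpha = alpha * np.array([float(obs_of[s] == o) for s in states])
        alpha = (alpha / alpha.sum()) @ T        # assumes seq has prob > 0
    return {o: alpha[[obs_of[s] == o for s in states]].sum() for o in 'bcx'}

# A first-order HMM over raw observations cannot distinguish these two
# contexts, but the clones of 'x' can:
print(predict_next(['b', 'x']))  # 'c' has probability 1
print(predict_next(['c', 'x']))  # 'b' has probability 1
```

Because the emission map is deterministic and many-to-one, the transition matrix stays sparse while the clones carry higher-order context.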

Community Detection, Language Modelling

Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs

no code implementations6 Dec 2018 Miguel Lázaro-Gredilla, Dianhuan Lin, J. Swaroop Guntupalli, Dileep George

Humans can infer concepts from image pairs and apply those in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams.

Novel Concepts

Cortical Microcircuits from a Generative Vision Model

no code implementations3 Aug 2018 Dileep George, Alexander Lavin, J. Swaroop Guntupalli, David Mely, Nick Hay, Miguel Lazaro-Gredilla

Understanding the information processing roles of cortical circuits is an outstanding problem in neuroscience and artificial intelligence.

Bayesian Inference

Teaching Compositionality to CNNs

no code implementations CVPR 2017 Austin Stone, Huayan Wang, Michael Stark, Yi Liu, D. Scott Phoenix, Dileep George

Convolutional neural networks (CNNs) have shown great success in computer vision, approaching human-level performance when trained for specific tasks via application-specific loss functions.

Object Recognition

Generative Shape Models: Joint Text Recognition and Segmentation with Very Little Training Data

no code implementations NeurIPS 2016 Xinghua Lou, Ken Kansky, Wolfgang Lehrach, CC Laan, Bhaskara Marthi, D. Scott Phoenix, Dileep George

We demonstrate that a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required for competing discriminative methods.

Instance Segmentation, Scene Text Recognition +1

A backward pass through a CNN using a generative model of its activations

no code implementations8 Nov 2016 Huayan Wang, Anna Chen, Yi Liu, Dileep George, D. Scott Phoenix

Neural networks have been shown to be a practical way of building a very complex mapping between a pre-specified input space and an output space.


Hierarchical compositional feature learning

no code implementations7 Nov 2016 Miguel Lázaro-Gredilla, Yi Liu, D. Scott Phoenix, Dileep George

We introduce the hierarchical compositional network (HCN), a directed generative model able to discover and disentangle, without supervision, the building blocks of a set of binary images.
