Search Results for author: Xaq Pitkow

Found 20 papers, 6 papers with code

Generalization properties of contrastive world models

no code implementations 29 Dec 2023 Kandan Ramakrishnan, R. James Cotton, Xaq Pitkow, Andreas S. Tolias

We systematically test the model under a number of OOD generalization scenarios, such as extrapolation to new object attributes and the introduction of new conjunctions or new attributes.

Object

Inferring Inference

1 code implementation 4 Oct 2023 Rajkumar Vasudeva Raju, Zhe Li, Scott Linderman, Xaq Pitkow

Given a time series of neural activity during a perceptual inference task, our framework finds (i) the neural representation of relevant latent variables, (ii) interactions between these variables that define the brain's internal model of the world, and (iii) message-functions specifying the inference algorithm.

Experimental Design

Understanding robustness and generalization of artificial neural networks through Fourier masks

1 code implementation 16 Mar 2022 Nikos Karantzas, Emma Besier, Josue Ortega Caro, Xaq Pitkow, Andreas S. Tolias, Ankit B. Patel, Fabio Anselmi

Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place.

Data Augmentation
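The entry above asks which spatial frequencies a trained network actually relies on. As background only, here is a minimal numpy sketch of applying a radial low-pass mask in the Fourier domain; the mask shape, cutoff fraction, and random test image are illustrative assumptions, not the authors' procedure (the linked implementation covers their actual method).

```python
import numpy as np

def radial_lowpass_mask(h, w, cutoff_frac=0.25):
    """Binary mask keeping frequencies within a radius of the DC component.

    `cutoff_frac` is a fraction of the Nyquist radius; it is an illustrative
    parameter, not one taken from the paper.
    """
    fy = np.fft.fftfreq(h)[:, None]          # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]          # horizontal frequencies
    radius = np.sqrt(fx**2 + fy**2)
    return (radius <= cutoff_frac * 0.5).astype(float)

def apply_fourier_mask(image, mask):
    """Filter a 2D image by masking its Fourier spectrum."""
    spectrum = np.fft.fft2(image)
    filtered = np.fft.ifft2(spectrum * mask)
    return np.real(filtered)

# Example: filter a random "image" and compare energy before and after.
img = np.random.rand(64, 64)
mask = radial_lowpass_mask(*img.shape, cutoff_frac=0.25)
low = apply_fourier_mask(img, mask)
print(np.linalg.norm(img), np.linalg.norm(low))
```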

Learning Dynamics and Structure of Complex Systems Using Graph Neural Networks

no code implementations 22 Feb 2022 Zhe Li, Andreas S. Tolias, Xaq Pitkow

In this work we trained graph neural networks to fit time series from an example nonlinear dynamical system, the belief propagation algorithm.

Inductive Bias Time Series +1
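The entry above treats sum-product belief propagation as the nonlinear dynamical system whose message time series a graph neural network is trained to fit. Below is a minimal sketch of generating such a trajectory on a toy binary model; the triangle graph and the potentials are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy pairwise binary MRF on a triangle graph; the BP message updates are the
# dynamical system whose trajectory a GNN could be fit to.
edges = [(0, 1), (1, 2), (0, 2)]
psi = np.exp(0.8 * np.array([[1.0, -1.0], [-1.0, 1.0]]))  # pairwise potential table
phi = np.ones((3, 2))                                      # uniform unary potentials

# messages[(i, j)] is the message from node i to node j (a length-2 vector).
directed = edges + [(j, i) for i, j in edges]
messages = {e: np.ones(2) for e in directed}

def bp_step(messages):
    """One synchronous sum-product update; returns the new message dict."""
    new = {}
    for i, j in directed:
        incoming = phi[i].copy()
        for k, l in directed:
            if l == i and k != j:          # messages into i, excluding the target j
                incoming = incoming * messages[(k, l)]
        m = psi.T @ incoming               # sum over the sender's states
        new[(i, j)] = m / m.sum()          # normalize for numerical stability
    return new

# The recorded trajectory is the kind of time series a GNN could be trained on.
trajectory = [messages]
for _ in range(20):
    trajectory.append(bp_step(trajectory[-1]))
print(trajectory[-1][(0, 1)])              # converged message from node 0 to node 1
```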

Interpolating between sampling and variational inference with infinite stochastic mixtures

2 code implementations 18 Oct 2021 Richard D. Lange, Ari Benjamin, Ralf M. Haefner, Xaq Pitkow

This work takes a step towards a highly flexible yet simple family of inference methods that combines the complementary strengths of sampling and VI.

Variational Inference

Phase transitions in when feedback is useful

no code implementations 15 Oct 2021 Lokesh Boominathan, Xaq Pitkow

We show that the optimal feedback gain depends non-monotonically on both the computational parameters and the world dynamics, leading to phase transitions in whether feedback provides any utility for optimal inference under computational constraints.

Two-argument activation functions learn soft XOR operations like cortical neurons

no code implementations 13 Oct 2021 KiJung Yoon, Emin Orhan, Juhyun Kim, Xaq Pitkow

Neurons in the brain are complex machines with distinct functional compartments that interact nonlinearly.

Vocal Bursts Valence Prediction
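For the entry above, a "two-argument activation" takes two separate weighted sums of its inputs rather than one. The sketch below shows one way a smooth function of two arguments can implement a soft XOR; this particular parameterization (probability that exactly one input is "on") is an assumption for illustration, not the activation fitted in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_xor(a, b):
    """One illustrative soft XOR of two scalar drives.

    Treating sigmoid(a) and sigmoid(b) as independent probabilities p and q,
    p + q - 2*p*q is the probability that exactly one is 'on'. Hypothetical
    parameterization, not the one learned in the paper.
    """
    p, q = sigmoid(a), sigmoid(b)
    return p + q - 2.0 * p * q

# A two-argument unit: each argument is a separate weighted sum of the inputs,
# loosely analogous to distinct functional compartments of a neuron.
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=4), rng.normal(size=4)
x = rng.normal(size=4)
print(soft_xor(w1 @ x, w2 @ x))

# The XOR truth table emerges for strongly positive/negative arguments:
for a in (-5.0, 5.0):
    for b in (-5.0, 5.0):
        print(a > 0, b > 0, round(soft_xor(a, b), 2))
```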

Generalization of graph network inferences in higher-order graphical models

no code implementations 12 Jul 2021 Yicheng Fei, Xaq Pitkow

A major challenge for these graphical models is that inferences such as marginalization are intractable for general graphs.

Out-of-Distribution Generalization
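The intractability mentioned above comes from exact marginalization scaling exponentially with the number of variables. The brute-force sketch below makes the k**n cost explicit; the random factors, including one higher-order (three-variable) factor, are purely illustrative.

```python
import numpy as np
from itertools import product

def brute_force_marginal(factors, n_vars, n_states, target):
    """Marginal of `target` by summing the unnormalized joint over all states."""
    marginal = np.zeros(n_states)
    for config in product(range(n_states), repeat=n_vars):   # k**n terms
        weight = 1.0
        for scope, table in factors:
            weight *= table[tuple(config[i] for i in scope)]
        marginal[config[target]] += weight
    return marginal / marginal.sum()

rng = np.random.default_rng(1)
n_vars, n_states = 8, 3
# One higher-order factor over three variables plus pairwise factors in a chain.
factors = [((0, 1, 2), rng.random((3, 3, 3)))]
factors += [((i, i + 1), rng.random((3, 3))) for i in range(n_vars - 1)]
print(brute_force_marginal(factors, n_vars, n_states, target=0))  # 3**8 = 6561 terms
```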

Exploring representation learning for flexible few-shot tasks

no code implementations 1 Jan 2021 Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard Zemel

In this work, we consider a realistic setting where the relationship between examples can change from episode to episode depending on the task context, which is not given to the learner.

Few-Shot Learning Representation Learning

Probing Few-Shot Generalization with Attributes

no code implementations 10 Dec 2020 Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard Zemel

Despite impressive progress in deep learning, generalizing far beyond the training distribution is an important open challenge.

Attribute Few-Shot Learning +1

Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics

no code implementations NeurIPS 2020 Saurabh Daptardar, Paul Schrater, Xaq Pitkow

This approach provides a foundation for interpreting the behavioral and neural dynamics of highly adapted controllers in animal brains.

Continuous Control

Improved memory in recurrent neural networks with sequential non-normal dynamics

1 code implementation ICLR 2020 A. Emin Orhan, Xaq Pitkow

In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks.
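The claim above is easy to verify numerically: an orthogonal map preserves the norm of the hidden state, but following it with a ReLU does not, while a feedforward chain is a simple example of a non-normal matrix. The sketch below checks both points; the chain with unit gains is an assumption for illustration, not the initialization studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))     # random orthogonal matrix
x = rng.normal(size=n)

relu = lambda v: np.maximum(v, 0.0)
print(np.linalg.norm(x))                # original norm
print(np.linalg.norm(Q @ x))            # preserved by the orthogonal map
print(np.linalg.norm(relu(Q @ x)))      # shrunk once the ReLU is applied

# A simple non-normal alternative: a feedforward chain (shift) matrix, loosely
# in the spirit of the "sequential" dynamics in the title (unit gains assumed).
S = np.eye(n, k=-1)                     # S @ x shifts activity down the chain
print(np.allclose(S @ S.T, S.T @ S))    # False: S does not commute with S.T
```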

Belief dynamics extraction

no code implementations 2 Feb 2019 Arun Kumar, Zhengwei Wu, Xaq Pitkow, Paul Schrater

Estimating the structure of these internal states is crucial for understanding the neural basis of behavior.

Model-based Reinforcement Learning

Inverse Rational Control: Inferring What You Think from How You Forage

no code implementations 24 May 2018 Zhengwei Wu, Paul Schrater, Xaq Pitkow

Complex behaviors are often driven by an internal model, which integrates sensory information over time and facilitates long-term planning.

Imitation Learning

Inference in Probabilistic Graphical Models by Graph Neural Networks

1 code implementation 21 Mar 2018 KiJung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard Zemel, Xaq Pitkow

Message-passing algorithms, such as belief propagation, are a natural way to disseminate evidence amongst correlated variables while exploiting the graph structure, but these algorithms can struggle when the conditional dependency graphs contain loops.

Decision Making
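As context for the entry above, a generic message-passing (GNN) layer keeps a hidden state per variable, sends messages along edges, aggregates them, and updates each node, which is the kind of computation trained here to perform inference where loopy belief propagation struggles. The architecture below (tanh message/update maps, sum aggregation, random weights) is a hypothetical stand-in, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, hidden = 5, 8
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]         # a loopy (cyclic) graph
directed = edges + [(j, i) for i, j in edges]            # messages both ways
W_msg = rng.normal(scale=0.3, size=(hidden, hidden))     # message map
W_upd = rng.normal(scale=0.3, size=(hidden, 2 * hidden)) # node-update map

def gnn_step(h):
    """One round of message passing followed by a node-state update."""
    msgs = np.zeros_like(h)
    for i, j in directed:
        msgs[j] += np.tanh(W_msg @ h[i])                 # aggregate by summing
    return np.tanh(W_upd @ np.concatenate([h, msgs], axis=1).T).T

h = rng.normal(size=(n_nodes, hidden))
for _ in range(10):                                      # iterate a few rounds
    h = gnn_step(h)
print(h.shape)                                           # (5, 8) node states
```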

Reviving and Improving Recurrent Back-Propagation

1 code implementation ICML 2018 Renjie Liao, Yuwen Xiong, Ethan Fetaya, Lisa Zhang, KiJung Yoon, Xaq Pitkow, Raquel Urtasun, Richard Zemel

We examine all RBP variants along with BPTT and TBPTT in three different application domains: associative memory with continuous Hopfield networks, document classification in citation networks using graph neural networks, and hyperparameter optimization for fully connected networks.

Document Classification Hyperparameter Optimization
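Recurrent back-propagation (RBP) differentiates through the fixed point of a recurrent computation instead of unrolling it in time as BPTT/TBPTT do. The sketch below applies implicit differentiation to a toy fixed-point system and checks the gradient against finite differences; the update rule, the loss, and the direct linear solve (the paper's variants solve the same system iteratively) are illustrative choices, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = 0.3 * rng.normal(size=(n, n))
x = rng.normal(size=n)

# Forward pass: iterate to the fixed point h* = tanh(W h* + x).
h = np.zeros(n)
for _ in range(200):
    h = np.tanh(W @ h + x)

# Backward pass: implicit differentiation of the fixed point. With
# D = diag(1 - h*^2), (I - D W) dh* = D dx, so for the toy loss
# L = 0.5 * ||h*||^2 the gradient is dL/dx = D (I - W^T D)^{-1} h*.
D = np.diag(1.0 - h**2)
grad = D @ np.linalg.solve(np.eye(n) - W.T @ D, h)

# Finite-difference check of one coordinate.
eps = 1e-6
x2 = x.copy()
x2[0] += eps
h2 = np.zeros(n)
for _ in range(200):
    h2 = np.tanh(W @ h2 + x2)
fd = (0.5 * h2 @ h2 - 0.5 * h @ h) / eps
print(grad[0], fd)   # the two numbers should agree closely
```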

Skip Connections Eliminate Singularities

no code implementations ICLR 2018 A. Emin Orhan, Xaq Pitkow

Here, we present a novel explanation for the benefits of skip connections in training very deep networks.
