Search Results for author: Gabriel Kreiman

Found 33 papers, 23 papers with code

Sparse Distributed Memory is a Continual Learner

1 code implementation · 20 Mar 2023 · Trenton Bricken, Xander Davies, Deepak Singh, Dmitry Krotov, Gabriel Kreiman

Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving.

Continual Learning

Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization

no code implementations · 10 Feb 2023 · Ravi Srinivasan, Francesca Mignacco, Martino Sorbaro, Maria Refinetti, Avi Cooper, Gabriel Kreiman, Giorgia Dellaferrera

"Forward-only" algorithms, which train neural networks while avoiding a backward pass, have recently gained attention as a way of solving the biologically unrealistic aspects of backpropagation.

Efficient Zero-shot Visual Search via Target and Context-aware Transformer

no code implementations · 24 Nov 2022 · Zhiwei Ding, Xuezhe Ren, Erwan David, Melissa Vo, Gabriel Kreiman, Mengmi Zhang

Target modulation is computed as patch-wise local relevance between the target and search images, whereas contextual modulation is applied in a global fashion.
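A rough illustration of the patch-wise target modulation described above: relevance can be read as the cosine similarity between a target feature vector and the feature vector of each patch of the search image. The feature shapes and the function name patch_relevance are illustrative placeholders, not the paper's code.

```python
# Illustrative sketch (not the paper's implementation): patch-wise relevance
# between a target embedding and per-patch embeddings of a search image.
import numpy as np

def patch_relevance(target_feat, search_patch_feats):
    """Cosine similarity between one target feature vector (D,) and
    per-patch features of the search image (H, W, D) -> relevance map (H, W)."""
    t = target_feat / (np.linalg.norm(target_feat) + 1e-8)
    p = search_patch_feats / (np.linalg.norm(search_patch_feats, axis=-1, keepdims=True) + 1e-8)
    return p @ t  # dot product of unit vectors = cosine similarity

# Toy example with random features standing in for a feature extractor.
rng = np.random.default_rng(0)
target = rng.normal(size=64)
patches = rng.normal(size=(14, 14, 64))
relevance = patch_relevance(target, patches)
print(relevance.shape, relevance.max())  # (14, 14); the peak marks the most target-like patch
```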

Reason from Context with Self-supervised Learning

no code implementations · 23 Nov 2022 · Xiao Liu, Ankur Sikarwar, Gabriel Kreiman, Zenglin Shi, Mengmi Zhang

To better accommodate the object-centric nature of current downstream tasks such as object recognition and detection, various methods have been proposed to suppress contextual biases or disentangle objects from contexts.

Object · Object Recognition +2

Red Teaming with Mind Reading: White-Box Adversarial Policies Against RL Agents

2 code implementations · 5 Sep 2022 · Stephen Casper, Taylor Killian, Gabriel Kreiman, Dylan Hadfield-Menell

In this work, we study white-box adversarial policies and show that having access to a target agent's internal state can be useful for identifying its vulnerabilities.

reinforcement-learning · Reinforcement Learning (RL)

Improving generalization by mimicking the human visual diet

1 code implementation · 15 Jun 2022 · Spandan Madan, You Li, Mengmi Zhang, Hanspeter Pfister, Gabriel Kreiman

We present a new perspective on bridging the generalization gap between biological and computer vision -- mimicking the human visual diet.

Domain Generalization

Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass

1 code implementation · 27 Jan 2022 · Giorgia Dellaferrera, Gabriel Kreiman

Supervised learning in artificial neural networks typically relies on backpropagation, where the weights are updated based on the error-function gradients and sequentially propagated from the output layer to the input layer.
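For reference, a minimal NumPy sketch of the backward pass described in this sentence, with the error propagated sequentially from the output layer back to the input layer. This is textbook backpropagation shown as the baseline that the paper's forward-only update avoids; it is not the authors' algorithm.

```python
# Standard backpropagation for a tiny 2-layer MLP (NumPy), illustrating the
# sequential output-to-input gradient flow that a forward-only method avoids.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))                        # batch of inputs
y = rng.integers(0, 2, size=(32, 1)).astype(float)   # binary targets
W1 = rng.normal(size=(10, 16)) * 0.1
W2 = rng.normal(size=(16, 1)) * 0.1

for step in range(100):
    # forward pass
    h = np.maximum(0.0, x @ W1)                      # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))              # sigmoid output
    # backward pass: error flows from the output layer back toward the input layer
    d_out = (p - y) / len(x)                         # gradient at the output (sigmoid + BCE)
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (h > 0)                   # propagate through the ReLU
    dW1 = x.T @ d_h
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2
```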

On the Efficacy of Co-Attention Transformer Layers in Visual Question Answering

no code implementations · 11 Jan 2022 · Ankur Sikarwar, Gabriel Kreiman

In recent years, multi-modal transformers have shown significant progress in Vision-Language tasks, such as Visual Question Answering (VQA), outperforming previous architectures by a considerable margin.

POS · Question Answering +1

Robust Feature-Level Adversaries are Interpretability Tools

2 code implementations · 7 Oct 2021 · Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman

We demonstrate that they can be used to produce targeted, universal, disguised, physically-realizable, and black-box attacks at the ImageNet scale.

Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases

1 code implementation · NeurIPS 2021 · Shashi Kant Gupta, Mengmi Zhang, Chia-Chien Wu, Jeremy M. Wolfe, Gabriel Kreiman

To elucidate the mechanisms responsible for asymmetry in visual search, we propose a computational model that takes a target and a search image as inputs and produces a sequence of eye movements until the target is found.
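A schematic of the input/output loop described above, assuming a precomputed attention map: fixate the most target-like location, suppress it (inhibition of return), and stop once the target is reached. This is a generic sketch, not the authors' model.

```python
# Schematic search loop (illustrative only): repeatedly fixate the strongest
# location on an attention map, suppress visited locations, stop at the target.
import numpy as np

def fixation_sequence(attention_map, target_loc, max_fixations=20):
    amap = attention_map.copy()
    fixations = []
    for _ in range(max_fixations):
        loc = tuple(int(i) for i in np.unravel_index(np.argmax(amap), amap.shape))  # winner-take-all
        fixations.append(loc)
        if loc == target_loc:          # target found -> stop the search
            break
        amap[loc] = -np.inf            # inhibition of return
    return fixations

rng = np.random.default_rng(1)
amap = rng.random((8, 8))              # stand-in for a target-modulated attention map
print(fixation_sequence(amap, target_loc=(3, 5)))
```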

What can human minimal videos tell us about dynamic recognition models?

1 code implementation · 19 Apr 2021 · Guy Ben-Yosef, Gabriel Kreiman, Shimon Ullman

In human vision objects and their parts can be visually recognized from purely spatial or purely temporal information but the mechanisms integrating space and time are poorly understood.

Tuned Compositional Feature Replays for Efficient Stream Learning

1 code implementation · 6 Apr 2021 · Morgan B. Talbot, Rushikesh Zawar, Rohil Badkundri, Mengmi Zhang, Gabriel Kreiman

To address the limited number of existing online stream learning datasets, we introduce 2 new benchmarks by adapting existing datasets for stream learning.

Continual Learning · Image Classification +2

When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes

1 code implementation · ICCV 2021 · Philipp Bomatter, Mengmi Zhang, Dimitar Karev, Spandan Madan, Claire Tseng, Gabriel Kreiman

Our model captures useful information for contextual reasoning, enabling human-level performance and better robustness in out-of-context conditions compared to baseline models across OCD and other out-of-context datasets.

Object

Fooling the primate brain with minimal, targeted image manipulation

no code implementations · 11 Nov 2020 · Li Yuan, Will Xiao, Giorgia Dellaferrera, Gabriel Kreiman, Francis E. H. Tay, Jiashi Feng, Margaret S. Livingstone

Here we propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.

Adversarial Attack · Image Manipulation
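As a generic illustration of a minimal, targeted image perturbation (not the specific methods of this paper), one can optimize a small additive perturbation that pushes a model's prediction toward a chosen class while penalizing its norm; the toy model below is a placeholder.

```python
# Generic targeted-perturbation sketch: gradient descent on the input toward a
# chosen class with a norm penalty. Illustrative recipe, not the paper's methods.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

image = torch.rand(1, 3, 64, 64)
target_class = torch.tensor([3])
delta = torch.zeros_like(image, requires_grad=True)   # the perturbation being optimized
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(100):
    logits = model(image + delta)
    # push the prediction toward the target class while keeping the perturbation small
    loss = nn.functional.cross_entropy(logits, target_class) + 0.1 * delta.norm()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(model(image + delta).argmax(dim=1))  # ideally the target class
```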

What am I Searching for: Zero-shot Target Identity Inference in Visual Search

1 code implementation · 25 May 2020 · Mengmi Zhang, Gabriel Kreiman

Using those error fixations, we developed a model (InferNet) to infer what the target was.

Can Deep Learning Recognize Subtle Human Activities?

1 code implementation · CVPR 2020 · Vincent Jacquot, Zhuofan Ying, Gabriel Kreiman

Deep Learning has driven recent and exciting progress in computer vision, instilling the belief that these algorithms could solve any visual task.

Action Classification

Frivolous Units: Wider Networks Are Not Really That Wide

1 code implementation · 10 Dec 2019 · Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman

We identify two distinct types of "frivolous" units that proliferate when the network's width is increased: prunable units which can be dropped out of the network without significant change to the output and redundant units whose activities can be expressed as a linear combination of others.
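The redundancy criterion in the sentence above can be pictured as a least-squares test: a unit is redundant if its activations are well reconstructed from a linear combination of the other units. The snippet below is an illustrative check, not the paper's exact procedure.

```python
# Illustrative redundancy test: can one unit's activations be written as a
# linear combination of the others? (Ordinary least squares, R^2 score.)
import numpy as np

def redundancy_r2(activations, unit):
    """activations: (n_samples, n_units). R^2 of predicting `unit` from the rest."""
    y = activations[:, unit]
    X = np.delete(activations, unit, axis=1)
    X = np.column_stack([X, np.ones(len(X))])        # add a bias term
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 10))
acts[:, 0] = 2 * acts[:, 1] - acts[:, 2]             # make unit 0 redundant by construction
print(round(redundancy_r2(acts, 0), 3), round(redundancy_r2(acts, 5), 3))  # ~1.0 vs ~0.0
```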

Putting visual object recognition in context

1 code implementation · CVPR 2020 · Mengmi Zhang, Claire Tseng, Gabriel Kreiman

To model the role of contextual information in visual recognition, we systematically investigated ten critical properties of where, when, and how context modulates recognition, including the amount of context, context and object resolution, geometrical structure of context, context congruence, and temporal dynamics of contextual modulation.

Object · Object Recognition

Variational Prototype Replays for Continual Learning

1 code implementation · 23 May 2019 · Mengmi Zhang, Tao Wang, Joo Hwee Lim, Gabriel Kreiman, Jiashi Feng

In each classification task, our method learns a set of variational prototypes with their means and variances, so that embeddings of samples from the same class are represented by a prototypical distribution and class-representative prototypes are kept well separated.

Continual Learning · General Classification +2
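A rough sketch of the prototype idea in the entry above, under the assumption that each class is summarized by a Gaussian over its embeddings whose samples serve as replays; the class and method names are illustrative, not the authors' code.

```python
# Illustrative variational-prototype replay: keep a per-class Gaussian (mean,
# variance) over embeddings and sample pseudo-embeddings for rehearsal.
import numpy as np

class VariationalPrototypes:
    def __init__(self):
        self.mu, self.var = {}, {}

    def update(self, embeddings, label):
        # summarize this class by the mean and variance of its embeddings
        self.mu[label] = embeddings.mean(axis=0)
        self.var[label] = embeddings.var(axis=0) + 1e-6

    def sample_replay(self, label, n, rng):
        # draw replay embeddings from the stored class distribution
        return rng.normal(self.mu[label], np.sqrt(self.var[label]),
                          size=(n, len(self.mu[label])))

rng = np.random.default_rng(0)
protos = VariationalPrototypes()
protos.update(rng.normal(loc=2.0, size=(100, 8)), label=0)   # embeddings of class 0
replay = protos.sample_replay(0, n=16, rng=rng)              # pseudo-samples for rehearsal
print(replay.shape, round(float(replay.mean()), 2))
```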

Gradient-free activation maximization for identifying effective stimuli

1 code implementation · 1 May 2019 · Will Xiao, Gabriel Kreiman

To circumvent this problem, we developed a method for gradient-free activation maximization by combining a generative neural network with a genetic algorithm.
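A toy sketch of gradient-free activation maximization as described above: a genetic algorithm evolves latent codes so that a generator's outputs maximally drive a chosen unit. The linear "generator" and "neuron" below are stand-ins for the real networks and recordings.

```python
# Gradient-free activation maximization sketch: evolve latent codes with a
# genetic algorithm so the generated "images" maximize a unit's response.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(16, 256))             # toy linear "generator": z (16,) -> image (256,)
w_neuron = rng.normal(size=256)            # toy recorded unit: activation = w . image

def activation(z):                         # fitness: unit response to the generated image
    return float(w_neuron @ (z @ G))

pop = rng.normal(size=(50, 16))            # initial population of latent codes
for generation in range(200):
    fitness = np.array([activation(z) for z in pop])
    parents = pop[np.argsort(fitness)[-10:]]                      # keep the best codes
    children = parents[rng.integers(0, 10, size=40)]              # clone parents
    children = children + 0.2 * rng.normal(size=children.shape)   # mutate
    pop = np.vstack([parents, children])

print("best activation:", max(activation(z) for z in pop))
```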

Lift-the-flap: what, where and when for context reasoning

no code implementations · 1 Feb 2019 · Mengmi Zhang, Claire Tseng, Karla Montejo, Joseph Kwon, Gabriel Kreiman

Context reasoning is critical in a wide variety of applications where current inputs need to be interpreted in the light of previous experience and knowledge.

General Classification · Object Recognition +1

A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception

no code implementations · 28 May 2018 · William Lotter, Gabriel Kreiman, David Cox

Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial.

Open-Ended Question Answering · Predict Future Video Frames

Learning Scene Gist with Convolutional Neural Networks to Improve Object Recognition

no code implementations · 6 Mar 2018 · Kevin Wu, Eric Wu, Gabriel Kreiman

We use a biologically inspired two-part convolutional neural network ('GistNet') that models the fovea and periphery to provide a proof-of-principle demonstration that computational object recognition can significantly benefit from the gist of the scene as contextual information.

Object · Object Recognition +1
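A generic two-stream sketch in the spirit of the fovea/periphery idea in the entry above: one branch receives a high-resolution central crop, the other a downsampled full image (the gist), and their features are fused for classification. This is not the actual GistNet architecture.

```python
# Two-stream fovea/periphery sketch (illustrative, not GistNet itself): a
# central crop branch plus a low-resolution "gist" branch, fused before the classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FoveaPeripheryNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.fovea = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.periphery = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, image):
        _, _, H, W = image.shape
        crop = image[:, :, H // 4: 3 * H // 4, W // 4: 3 * W // 4]  # central "fovea" crop
        gist = F.interpolate(image, scale_factor=0.25,
                             mode="bilinear", align_corners=False)  # low-res periphery
        return self.classifier(torch.cat([self.fovea(crop), self.periphery(gist)], dim=1))

logits = FoveaPeripheryNet()(torch.rand(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 10])
```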

Recurrent computations for visual pattern completion

1 code implementation · 7 Jun 2017 · Hanlin Tang, Martin Schrimpf, Bill Lotter, Charlotte Moerman, Ana Paredes, Josue Ortega Caro, Walter Hardesty, David Cox, Gabriel Kreiman

First, subjects robustly recognized objects even when rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking.

Image Classification

On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations

1 code implementation · 23 Mar 2017 · Nicholas Cheney, Martin Schrimpf, Gabriel Kreiman

We show that convolutional networks are surprisingly robust to a number of internal perturbations in the higher convolutional layers but the bottom convolutional layers are much more fragile.
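The finding above suggests an experiment of the following shape: perturb the weights of one convolutional layer at a time and compare the resulting drop in accuracy. The model, data, and noise scale below are placeholders, not the paper's setup.

```python
# Layer-wise weight-perturbation sketch: add Gaussian noise to one conv layer's
# weights at a time and compare accuracy against the unperturbed model.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for a trained CNN
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
images = torch.rand(64, 3, 32, 32)          # placeholder evaluation batch
labels = torch.randint(0, 10, (64,))

def accuracy(m):
    with torch.no_grad():
        return (m(images).argmax(dim=1) == labels).float().mean().item()

baseline = accuracy(model)
for idx, layer in enumerate(model):
    if isinstance(layer, nn.Conv2d):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():               # add noise scaled to the layer's weight spread
            perturbed[idx].weight += 0.5 * perturbed[idx].weight.std() * torch.randn_like(perturbed[idx].weight)
        print(f"conv layer {idx}: accuracy {baseline:.2f} -> {accuracy(perturbed):.2f}")
```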

Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning

17 code implementations · 25 May 2016 · William Lotter, Gabriel Kreiman, David Cox

Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world.

Object Recognition · Video Prediction
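To make the unsupervised rule concrete, here is a minimal next-frame prediction loop on synthetic clips; a plain convolutional predictor stands in for the paper's PredNet architecture.

```python
# Minimal next-frame prediction loop on toy "video": the network is trained,
# without labels, to predict frame t+1 from frame t. A simple conv net is used
# as a stand-in for the paper's PredNet architecture.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

video = torch.rand(8, 10, 1, 32, 32)        # (batch, time, channels, H, W) toy clips

for epoch in range(5):
    for t in range(video.shape[1] - 1):
        frame, next_frame = video[:, t], video[:, t + 1]
        loss = nn.functional.l1_loss(predictor(frame), next_frame)  # predict the next frame
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: L1 loss {loss.item():.4f}")
```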

Unsupervised Learning of Visual Structure using Predictive Generative Networks

2 code implementations · 19 Nov 2015 · William Lotter, Gabriel Kreiman, David Cox

The ability to predict future states of the environment is a central pillar of intelligence.
