Search Results for author: Andrea Alamia

Found 6 papers, 3 papers with code

Understanding the computational demands underlying visual reasoning

no code implementations · 8 Aug 2021 · Mohit Vaishnav, Remi Cadene, Andrea Alamia, Drew Linsley, Rufin VanRullen, Thomas Serre

Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be primarily explained by both the type of relations (same-different vs. spatial-relation judgments) and the number of relations used to compose the underlying rules.

Visual Reasoning

On the role of feedback in visual processing: a predictive coding perspective

1 code implementation · 8 Jun 2021 · Andrea Alamia, Milad Mozafari, Bhavin Choksi, Rufin VanRullen

That is, we let the optimization process determine whether top-down connections and predictive coding dynamics are functionally beneficial.

BIG-bench Machine Learning · Object Recognition

Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics

2 code implementations · NeurIPS 2021 · Bhavin Choksi, Milad Mozafari, Callum Biggs O'May, Benjamin Ador, Andrea Alamia, Rufin VanRullen

The reconstruction errors are used to iteratively update the network's representations across timesteps, and to optimize the network's feedback weights over the natural image dataset, a form of unsupervised training.

Image Classification
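The update rule sketched in the Predify abstract can be illustrated with a toy loop. The code below is a minimal sketch under simplifying assumptions: a single pair of layers, a linear feedback (reconstruction) model, and illustrative variable names and learning rates not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer predictive coding loop: a higher layer predicts
# (reconstructs) the lower layer's activity via feedback weights W_fb.
# The reconstruction error drives two updates each timestep:
#   1. the higher-layer representation moves to reduce the error
#   2. the feedback weights are trained without labels (unsupervised)
# The linear feedback model and all names here are illustrative assumptions.

dim_low, dim_high = 16, 8
W_fb = rng.normal(scale=0.1, size=(dim_low, dim_high))  # feedback weights
x_low = rng.normal(size=dim_low)                        # lower-layer activity
r_high = np.zeros(dim_high)                             # higher-layer representation

lr_repr, lr_w = 0.1, 0.01
for t in range(50):
    recon = W_fb @ r_high                 # top-down prediction of x_low
    err = x_low - recon                   # reconstruction error
    r_high += lr_repr * (W_fb.T @ err)    # gradient step on the representation
    W_fb += lr_w * np.outer(err, r_high)  # gradient step on the feedback weights

print(float(np.mean((x_low - W_fb @ r_high) ** 2)))  # reconstruction error after 50 steps
```

Running the loop shows the reconstruction error shrinking across timesteps, which is the dynamic the abstract describes; the actual Predify implementation operates on deep convolutional features rather than random vectors.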

GAttANet: Global attention agreement for convolutional neural networks

no code implementations · 12 Apr 2021 · Rufin VanRullen, Andrea Alamia

We demonstrate the usefulness of this brain-inspired Global Attention Agreement network (GAttANet) for various convolutional backbones (from a simple 5-layer toy model to a standard ResNet50 architecture) and datasets (CIFAR10, CIFAR100, ImageNet-1k).

Brain-inspired predictive coding dynamics improve the robustness of deep neural networks

1 code implementation · NeurIPS Workshop SVRHM 2020 · Bhavin Choksi, Milad Mozafari, Callum Biggs O'May, B. Ador, Andrea Alamia, Rufin VanRullen

The reconstruction errors are used to iteratively update the network's representations across timesteps, and to optimize the network's feedback weights over the natural image dataset, a form of unsupervised training.

Image Classification

Which Neural Network Architecture matches Human Behavior in Artificial Grammar Learning?

no code implementations · 13 Feb 2019 · Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen

Our results show that both architectures can 'learn' (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar complexity level.

Neurons and Cognition · Human-Computer Interaction
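The training sequences in an artificial grammar learning task are strings generated by a finite-state grammar. The sketch below generates such strings from a hypothetical grammar (not the one used in the paper); the state table and symbol names are illustrative assumptions.

```python
import random

# Hypothetical finite-state artificial grammar (not the paper's grammar):
# each state maps to possible (emitted symbol, next state) transitions;
# a string is produced by walking from state "S" to the terminal state "E".
GRAMMAR = {
    "S": [("T", "A"), ("P", "B")],
    "A": [("S", "A"), ("X", "B")],
    "B": [("T", "C"), ("V", "E")],
    "C": [("X", "B"), ("V", "E")],
}

def generate(rng: random.Random) -> str:
    """Sample one grammatical string by a random walk through the states."""
    state, symbols = "S", []
    while state != "E":
        symbol, state = rng.choice(GRAMMAR[state])
        symbols.append(symbol)
    return "".join(symbols)

rng = random.Random(42)
print([generate(rng) for _ in range(5)])
```

Strings like these would serve as the training sequences; a network "learns" the grammar when it can distinguish strings this walk could produce from strings it could not.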
