Search Results for author: Rufin VanRullen

Found 27 papers, 12 papers with code

When does CLIP generalize better than unimodal models? When judging human-centric concepts

no code implementations RepL4NLP (ACL) 2022 Romain Bielawski, Benjamin Devillers, Tim Van De Cruys, Rufin VanRullen

We compare CLIP’s visual stream against two visually trained networks and CLIP’s textual stream against two linguistically trained networks, as well as multimodal combinations of these networks.

Classification Contrastive Learning +3
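
A minimal sketch of the feature extraction such a comparison relies on, using the Hugging Face transformers CLIP API; the checkpoint, the example stimuli, and the linear-probing setup are illustrative assumptions, not the paper's exact protocol:

```python
# Sketch: extract CLIP visual and textual embeddings for linear probing.
# Model choice and probing setup are illustrative, not the paper's exact protocol.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # any human-centric stimulus image
texts = ["a person helping someone", "a person ignoring someone"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])       # visual stream
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])    # textual stream

# These embeddings, and those of unimodal baselines (e.g. an ImageNet-trained CNN
# or a BERT-style language model), can be fed to identical linear probes to compare
# how well each space supports human-centric concept judgments.
print(img_emb.shape, txt_emb.shape)
```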

Modality-Agnostic fMRI Decoding of Vision and Language

no code implementations18 Mar 2024 Mitja Nikolaus, Milad Mozafari, Nicholas Asher, Leila Reddy, Rufin VanRullen

Previous studies have shown that it is possible to map brain activation data of subjects viewing images onto the feature representation space of not only vision models (modality-specific decoding) but also language models (cross-modal decoding).
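
A schematic of the decoding setup described above, with scikit-learn ridge regression standing in for whatever mapping the authors actually used (an assumption): brain responses are regressed onto a vision or language model's feature space, and decoding is scored by matching predicted features to the features of candidate stimuli.

```python
# Sketch of (cross-)modal decoding: regress fMRI voxels onto a model's feature space.
# Ridge regression and the identification metric are illustrative choices.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
X_train = rng.normal(size=(800, 5000))   # fMRI patterns (trials x voxels)
Y_train = rng.normal(size=(800, 768))    # target features (vision or language model)
X_test = rng.normal(size=(50, 5000))
Y_test = rng.normal(size=(50, 768))

decoder = Ridge(alpha=1e3).fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)

# Identification accuracy: is each predicted feature vector closest to the
# features of the stimulus that was actually presented?
sims = cosine_similarity(Y_pred, Y_test)
accuracy = np.mean(sims.argmax(axis=1) == np.arange(len(Y_test)))
print(f"identification accuracy: {accuracy:.2f}")
```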

Zero-shot cross-modal transfer of Reinforcement Learning policies through a Global Workspace

no code implementations7 Mar 2024 Léopold Maytié, Benjamin Devillers, Alexandre Arnold, Rufin VanRullen

First, we train a 'Global Workspace' to exploit information collected about the environment via two input modalities (a visual input, or an attribute vector representing the state of the agent and/or its environment).

Attribute Contrastive Learning +1
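
The zero-shot transfer idea can be illustrated with a toy sketch (all module sizes and names are hypothetical placeholders): two modality encoders project into a shared workspace latent, a policy is trained on that latent from one modality, and at test time the other encoder is plugged in without retraining the policy.

```python
# Toy sketch of zero-shot cross-modal policy transfer through a shared latent.
# Architectures and dimensions are hypothetical placeholders.
import torch
import torch.nn as nn

LATENT = 16

vision_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, LATENT))
attr_encoder   = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, LATENT))
policy         = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 4))  # 4 discrete actions

# Training phase (not shown): align both encoders into the same workspace latent,
# then train the policy on latents from, say, the attribute modality only.
attrs = torch.randn(1, 8)
action_logits = policy(attr_encoder(attrs))

# Zero-shot transfer: at test time, feed the *visual* modality through its own
# encoder into the same workspace; the frozen policy is reused unchanged.
frames = torch.randn(1, 3, 32, 32)
action_logits_visual = policy(vision_encoder(frames))
print(action_logits.shape, action_logits_visual.shape)
```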

Leveraging Self-Supervised Instance Contrastive Learning for Radar Object Detection

no code implementations13 Feb 2024 Colin Decourt, Rufin VanRullen, Didier Salle, Thomas Oberlin

In recent years, driven by the need for safer and more autonomous transport systems, the automotive industry has shifted toward integrating a growing number of Advanced Driver Assistance Systems (ADAS).

Contrastive Learning Object +5

Gradient strikes back: How filtering out high frequencies improves explanations

no code implementations18 Jul 2023 Sabine Muzellec, Leo Andeol, Thomas Fel, Rufin VanRullen, Thomas Serre

We show that (i) removing high-frequency noise yields significant improvements in the explainability scores obtained with gradient-based methods across multiple models -- leading to (ii) a novel ranking of state-of-the-art methods with gradient-based methods at the top.
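
The core operation can be illustrated by low-pass filtering a vanilla gradient explanation; a Gaussian blur is used here as a generic stand-in, not necessarily the paper's exact filtering scheme:

```python
# Sketch: compute a gradient saliency map and suppress its high-frequency content
# with a Gaussian low-pass filter (illustrative; the paper's filter may differ).
import torch
import torchvision.models as models
from scipy.ndimage import gaussian_filter

model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(x)
logits[0, logits.argmax()].backward()          # gradient of the top class w.r.t. the input
saliency = x.grad.abs().max(dim=1).values[0]   # H x W saliency map

smoothed = gaussian_filter(saliency.numpy(), sigma=3)  # low-pass: remove high frequencies
print(saliency.shape, smoothed.shape)
```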

Semi-supervised Multimodal Representation Learning through a Global Workspace

1 code implementation27 Jun 2023 Benjamin Devillers, Léopold Maytié, Rufin VanRullen

Recent deep learning models can efficiently combine inputs from different modalities (e.g., images and text) and learn to align their latent representations, or to translate signals from one domain to another (as in image captioning, or text-to-image generation).

Image Captioning Representation Learning +2
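
A compressed sketch of the kind of objectives this global-workspace setup combines; the architectures, the loss weighting, and the exact set of losses are simplified assumptions rather than the paper's specification: paired data drive translation between modalities, while unpaired data contribute cycle-consistency terms.

```python
# Sketch of global-workspace-style alignment between two modality latents.
# Losses and architectures are simplified assumptions, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_VIS, D_TXT, D_GW = 32, 24, 16
enc_v, dec_v = nn.Linear(D_VIS, D_GW), nn.Linear(D_GW, D_VIS)   # vision <-> workspace
enc_t, dec_t = nn.Linear(D_TXT, D_GW), nn.Linear(D_GW, D_TXT)   # text   <-> workspace

v_paired, t_paired = torch.randn(8, D_VIS), torch.randn(8, D_TXT)   # matched pairs (supervised)
v_only = torch.randn(32, D_VIS)                                     # unpaired vision (unsupervised)

# Supervised translation: encode one modality, decode into the other.
loss_tr = (F.mse_loss(dec_t(enc_v(v_paired)), t_paired)
           + F.mse_loss(dec_v(enc_t(t_paired)), v_paired))

# Unsupervised cycle consistency: vision -> workspace -> text -> workspace -> vision.
loss_cy = F.mse_loss(dec_v(enc_t(dec_t(enc_v(v_only)))), v_only)

loss = loss_tr + loss_cy   # weighting between terms omitted
loss.backward()
```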

Brain Captioning: Decoding human brain activity into images and text

no code implementations19 May 2023 Matteo Ferrante, Furkan Ozcelik, Tommaso Boccato, Rufin VanRullen, Nicola Toschi

Our brain captioning approach outperforms existing methods, while our image reconstruction pipeline generates plausible images with improved spatial relationships.

Brain Decoding Depth Estimation +3

Mathematical derivation of wave propagation properties in hierarchical neural networks with predictive coding feedback dynamics

no code implementations12 Apr 2023 Grégory Faye, Guilhem Fouilhé, Rufin VanRullen

Similarly, it is possible to determine in which direction, and at what speed, neural activity propagates in the system.

Natural scene reconstruction from fMRI signals using generative latent diffusion

1 code implementation9 Mar 2023 Furkan Ozcelik, Rufin VanRullen

In the second stage, we use the image-to-image framework of a latent diffusion model (Versatile Diffusion) conditioned on predicted multimodal (text and visual) features, to generate final reconstructed images.

Brain Computer Interface Descriptive
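
A schematic of the second-stage conditioning described above: plain ridge regressors stand in for the fMRI-to-feature mappings, and `generate_with_versatile_diffusion` is a hypothetical placeholder for the image-to-image call into the Versatile Diffusion pipeline, not a real API.

```python
# Sketch of the two-stage decoding idea: regress fMRI onto multimodal (text + vision)
# feature spaces, then condition a latent diffusion image-to-image model on them.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
fmri_train, fmri_test = rng.normal(size=(800, 5000)), rng.normal(size=(50, 5000))
clip_text_train  = rng.normal(size=(800, 768))   # CLIP-text features of training stimuli
clip_image_train = rng.normal(size=(800, 768))   # CLIP-vision features of training stimuli

text_reg  = Ridge(alpha=1e3).fit(fmri_train, clip_text_train)
image_reg = Ridge(alpha=1e3).fit(fmri_train, clip_image_train)

pred_text  = text_reg.predict(fmri_test)
pred_image = image_reg.predict(fmri_test)

def generate_with_versatile_diffusion(init_image, text_feats, image_feats):
    """Hypothetical placeholder for the image-to-image latent diffusion call
    conditioned on the predicted multimodal features (stage 1 supplies init_image)."""
    raise NotImplementedError

# reconstructions = [generate_with_versatile_diffusion(init, t, i)
#                    for init, t, i in zip(stage1_images, pred_text, pred_image)]
```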

A recurrent CNN for online object detection on raw radar frames

1 code implementation21 Dec 2022 Colin Decourt, Rufin VanRullen, Didier Salle, Thomas Oberlin

Exploiting time information (e.g., multiple frames) has been shown to help better capture the dynamics of objects and, therefore, the variation in the shape of objects.

Object object-detection +2
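
A toy illustration of the recurrent idea (layer sizes and the recurrence are placeholders, not the paper's architecture): a small convolutional encoder processes each raw radar frame online, and a convolutional recurrence carries information across frames so detections can exploit object dynamics.

```python
# Toy recurrent CNN over a sequence of radar frames (illustrative architecture only).
import torch
import torch.nn as nn

class TinyRecurrentDetector(nn.Module):
    def __init__(self, in_ch=1, hid_ch=16, num_classes=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.encode = nn.Conv2d(in_ch, hid_ch, 3, padding=1)
        self.update = nn.Conv2d(2 * hid_ch, hid_ch, 3, padding=1)   # simple conv recurrence
        self.head = nn.Conv2d(hid_ch, num_classes, 1)               # per-cell class scores

    def forward(self, frames):                 # frames: (B, T, C, H, W)
        b, t, c, h, w = frames.shape
        hidden = torch.zeros(b, self.hid_ch, h, w, device=frames.device)
        outputs = []
        for i in range(t):                     # online: one frame at a time
            feat = torch.relu(self.encode(frames[:, i]))
            hidden = torch.tanh(self.update(torch.cat([feat, hidden], dim=1)))
            outputs.append(self.head(hidden))
        return torch.stack(outputs, dim=1)     # (B, T, num_classes, H, W)

scores = TinyRecurrentDetector()(torch.randn(2, 5, 1, 64, 64))
print(scores.shape)  # torch.Size([2, 5, 3, 64, 64])
```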

DAROD: A Deep Automotive Radar Object Detector on Range-Doppler maps

1 code implementation 2022 IEEE Intelligent Vehicles Symposium (IV) 2022 Colin Decourt, Rufin VanRullen, Didier Salle, Thomas Oberlin

Due to the small number of raw-data automotive radar datasets and the low resolution of such radar sensors, automotive radar object detection has been little explored with deep learning models in comparison to camera- and lidar-based approaches.

Object object-detection +2

Meta-Reinforcement Learning with Self-Modifying Networks

no code implementations4 Feb 2022 Mathieu Chalvidal, Thomas Serre, Rufin VanRullen

Deep Reinforcement Learning has demonstrated the potential of neural networks tuned with gradient descent for solving complex tasks in well-delimited environments.

Meta Reinforcement Learning One-Shot Learning +2

Multimodal neural networks better explain multivoxel patterns in the hippocampus

1 code implementation NeurIPS Workshop SVRHM 2021 Bhavin Choksi, Milad Mozafari, Rufin VanRullen, Leila Reddy

The human hippocampus possesses "concept cells", neurons that fire when presented with stimuli belonging to a specific concept, regardless of the modality.

Hippocampus

Understanding the computational demands underlying visual reasoning

no code implementations8 Aug 2021 Mohit Vaishnav, Remi Cadene, Andrea Alamia, Drew Linsley, Rufin VanRullen, Thomas Serre

Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be primarily explained by both the type of relations (same-different vs. spatial-relation judgments) and the number of relations used to compose the underlying rules.

Visual Reasoning

On the role of feedback in visual processing: a predictive coding perspective

1 code implementation8 Jun 2021 Andrea Alamia, Milad Mozafari, Bhavin Choksi, Rufin VanRullen

That is, we let the optimization process determine whether top-down connections and predictive coding dynamics are functionally beneficial.

BIG-bench Machine Learning Object Recognition

Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics

2 code implementations NeurIPS 2021 Bhavin Choksi, Milad Mozafari, Callum Biggs O'May, Benjamin Ador, Andrea Alamia, Rufin VanRullen

The reconstruction errors are used to iteratively update the network's representations across timesteps, and to optimize the network's feedback weights over the natural image dataset, a form of unsupervised training.

Image Classification
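
A schematic of the iterative update described above; the coefficients and modules here are placeholders (see the authors' predify code release for the actual update rule): each representation is nudged toward its feedforward drive and away from the reconstruction error its feedback generates for the layer below.

```python
# Schematic predictive-coding loop over timesteps (simplified; not the exact Predify equations).
import torch
import torch.nn as nn
import torch.nn.functional as F

ff = nn.Linear(32, 16)     # feedforward encoder: layer below -> layer above
fb = nn.Linear(16, 32)     # feedback generator: layer above -> reconstruction of layer below

x = torch.randn(4, 32)                 # input representation (layer below)
e = torch.relu(ff(x))                  # initial representation (layer above)
beta, alpha = 0.4, 0.01                # illustrative mixing / error-correction coefficients

for _ in range(10):                    # iterate the dynamics across timesteps
    e = e.detach().requires_grad_(True)
    recon_error = F.mse_loss(fb(e), x)             # how well does feedback reconstruct the input?
    grad_e, = torch.autograd.grad(recon_error, e)  # direction that reduces that error
    e = (1 - beta) * e + beta * torch.relu(ff(x)) - alpha * grad_e
    # In a deeper stack, a top-down feedback term from the layer above is mixed in as well.

# The same reconstruction error can also be used to train the feedback weights `fb`
# on natural images, i.e. the unsupervised component mentioned above.
```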

GAttANet: Global attention agreement for convolutional neural networks

no code implementations12 Apr 2021 Rufin VanRullen, Andrea Alamia

We demonstrate the usefulness of this brain-inspired Global Attention Agreement network (GAttANet) for various convolutional backbones (from a simple 5-layer toy model to a standard ResNet50 architecture) and datasets (CIFAR10, CIFAR100, ImageNet-1k).

Predictive coding feedback results in perceived illusory contours in a recurrent neural network

2 code implementations NeurIPS Workshop SVRHM 2020 Zhaoyang Pang, Callum Biggs O'May, Bhavin Choksi, Rufin VanRullen

Finally we validated our conclusions in a deeper network (VGG): adding the same predictive coding feedback dynamics again leads to the perception of illusory contours.

Deep Learning and the Global Workspace Theory

no code implementations4 Dec 2020 Rufin VanRullen, Ryota Kanai

Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks.

Translation

Brain-inspired predictive coding dynamics improve the robustness of deep neural networks

1 code implementation NeurIPS Workshop SVRHM 2020 Bhavin Choksi, Milad Mozafari, Callum Biggs O'May, Benjamin Ador, Andrea Alamia, Rufin VanRullen

The reconstruction errors are used to iteratively update the network's representations across timesteps, and to optimize the network's feedback weights over the natural image dataset, a form of unsupervised training.

Image Classification

Go with the Flow: Adaptive Control for Neural ODEs

no code implementations ICLR 2021 Mathieu Chalvidal, Matthew Ricci, Rufin VanRullen, Thomas Serre

Despite their elegant formulation and lightweight memory cost, neural ordinary differential equations (NODEs) suffer from known representational limitations.

Image Reconstruction Representation Learning

Reconstructing Natural Scenes from fMRI Patterns using BigBiGAN

no code implementations31 Jan 2020 Milad Mozafari, Leila Reddy, Rufin VanRullen

Then, we applied this mapping to the fMRI activity patterns obtained from 50 new test images from 50 unseen categories in order to retrieve their latent vectors, and reconstruct the corresponding images.

Attribute Generative Adversarial Network +1

Which Neural Network Architecture matches Human Behavior in Artificial Grammar Learning?

no code implementations13 Feb 2019 Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen

Our results show that both architectures can 'learn' (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar complexity level.

Neurons and Cognition Human-Computer Interaction
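
A minimal sketch of the kind of comparison run in this study (the grammar, vocabulary size, and architectures are illustrative): strings generated from a small artificial grammar are used to train networks by back-propagation on next-symbol prediction, with a recurrent model compared against a feedforward one.

```python
# Sketch: next-symbol prediction on artificial-grammar strings with an LSTM
# (grammar, sizes, and training details are illustrative).
import torch
import torch.nn as nn

VOCAB, HID, SEQ = 6, 32, 8
lstm = nn.LSTM(VOCAB, HID, batch_first=True)
readout = nn.Linear(HID, VOCAB)
opt = torch.optim.Adam(list(lstm.parameters()) + list(readout.parameters()), lr=1e-3)

# Placeholder "grammatical" sequences; a real run would sample them from a
# finite-state grammar like those used in human artificial-grammar experiments.
seqs = torch.randint(0, VOCAB, (64, SEQ))
onehot = torch.nn.functional.one_hot(seqs, VOCAB).float()

for _ in range(100):
    out, _ = lstm(onehot[:, :-1])                  # predict each next symbol
    logits = readout(out)
    loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), seqs[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# A matched feedforward network (e.g. an MLP over a fixed window of symbols)
# trained with the same procedure provides the comparison to human learning curves.
```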

Reconstructing Faces from fMRI Patterns using Deep Generative Neural Networks

1 code implementation9 Oct 2018 Rufin VanRullen, Leila Reddy

While objects from different categories can be reliably decoded from fMRI brain response patterns, it has proved more difficult to distinguish visually similar inputs, such as different instances of the same category.

Human-Computer Interaction Neurons and Cognition
