Search Results for author: James J. DiCarlo

Found 17 papers, 10 papers with code

How does the primate brain combine generative and discriminative computations in vision?

no code implementations 11 Jan 2024 Benjamin Peters, James J. DiCarlo, Todd Gureckis, Ralf Haefner, Leyla Isik, Joshua Tenenbaum, Talia Konkle, Thomas Naselaris, Kimberly Stachenfeld, Zenna Tavares, Doris Tsao, Ilker Yildirim, Nikolaus Kriegeskorte

The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes giving rise to it.

Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds

no code implementations 21 Dec 2023 Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung

Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches.

Robustified ANNs Reveal Wormholes Between Human Category Percepts

1 code implementation 14 Aug 2023 Guy Gaziv, Michael J. Lee, James J. DiCarlo

Because human category reports (aka human percepts) are thought to be insensitive to those same small-norm perturbations -- and locally stable in general -- this argues that ANNs are incomplete scientific models of human visual perception.
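Perturbations of this kind are typically produced by gradient-based adversarial attacks. As a minimal illustration of the idea, here is a single fast-gradient-sign step against a toy two-class linear model; the model and all names are illustrative assumptions, not the paper's actual robustified ANNs or attack procedure:

```python
import numpy as np

def fgsm_perturb(x, w, true_label, eps):
    """One fast-gradient-sign step against a toy two-class linear model.

    Illustrative only: score = w @ x, predicted label = sign(score).
    The returned perturbation has L-infinity norm exactly eps.
    """
    grad = -true_label * w          # direction that lowers the true-label score
    return x + eps * np.sign(grad)

w = np.array([1.0, -0.5, 0.3, 0.8])
x = 0.1 * w                          # classified as +1 (score = 0.198)
x_adv = fgsm_perturb(x, w, true_label=1, eps=0.2)

print(np.sign(w @ x), np.sign(w @ x_adv))   # 1.0 -1.0: a small-norm flip
print(np.max(np.abs(x_adv - x)))            # L-inf norm of the perturbation: 0.2
```

The flip with such a small perturbation is exactly the model-behavior divergence the snippet above describes: the toy classifier changes its report while a human observer of `x` versus `x_adv` would not.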

How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning?

1 code implementation NeurIPS 2022 Chengxu Zhuang, Violet Xiang, Yoon Bai, Xiaoxuan Jia, Nicholas Turk-Browne, Kenneth Norman, James J. DiCarlo, Daniel LK Yamins

Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanism by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning abilities, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.

Self-Supervised Learning

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception

1 code implementation NeurIPS 2021 Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung

Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems.

Adversarial Robustness

Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs

1 code implementation NeurIPS Workshop SVRHM 2021 Avinash Baidya, Joel Dapello, James J. DiCarlo, Tiago Marques

Finally, we show that using distillation, it is possible to partially compress the knowledge in the ensemble model into a single model with a V1 front-end.
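The distillation mentioned here is, in the standard Hinton-style formulation, a KL objective that trains a single student to match the temperature-softened output distribution of a teacher (here, the ensemble). A minimal NumPy sketch of that loss term, with all names and the ensemble-averaging choice being illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    Standard distillation term; in the paper's setting the ensemble of
    V1-front-end models would supply teacher_logits.
    """
    p = softmax(teacher_logits, T)   # soft targets from the teacher/ensemble
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# One common choice: the teacher is the average of the member logits.
members = [np.array([2.0, 0.5, -1.0]), np.array([1.5, 1.0, -0.5])]
teacher = np.mean(members, axis=0)

print(distillation_loss(np.array([2.0, 0.5, -1.0]), teacher))  # small, positive
print(distillation_loss(np.array([-1.0, 0.5, 2.0]), teacher))  # larger mismatch
```

Minimizing this term over training images is what "compresses the knowledge in the ensemble model into a single model" in practice.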

Wiring Up Vision: Minimizing Supervised Synaptic Updates Needed to Produce a Primate Ventral Stream

1 code implementation ICLR 2022 Franziska Geiger, Martin Schrimpf, Tiago Marques, James J. DiCarlo

Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies. First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to the adult ventral stream.

Developmental Learning · Object Recognition

Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations

1 code implementation NeurIPS 2020 Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David Cox, James J. DiCarlo

Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system.

Object Recognition

Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?

1 code implementation 2 Jan 2020 Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, James J. DiCarlo

We therefore developed Brain-Score – a composite of multiple neural and behavioral benchmarks that score any ANN on how similar it is to the brain’s mechanisms for core object recognition – and we deployed it to evaluate a wide range of state-of-the-art deep ANNs.
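Conceptually, a composite benchmark of this kind averages normalized per-benchmark similarity scores into one number per candidate network. A toy sketch of that structure, where the correlation-based scoring and the benchmark setup are illustrative simplifications, not Brain-Score's actual ceiling-normalized metrics:

```python
import numpy as np

def benchmark_score(model_pred, neural_data):
    """Score one benchmark as the Pearson correlation between model
    predictions and recorded responses, clipped at zero (a simplification
    of Brain-Score's ceiling-normalized metrics)."""
    r = np.corrcoef(model_pred, neural_data)[0, 1]
    return max(float(r), 0.0)

def composite_brain_score(benchmarks):
    """Composite score = unweighted mean over the per-benchmark scores."""
    return float(np.mean([benchmark_score(p, d) for p, d in benchmarks]))

# Hypothetical neural/behavioral benchmarks for one candidate ANN.
rng = np.random.default_rng(1)
data = [rng.normal(size=50) for _ in range(3)]
model = [d + 0.5 * rng.normal(size=50) for d in data]   # imperfect predictions

score = composite_brain_score(list(zip(model, data)))
print(round(score, 3))   # a single number in [0, 1] summarizing brain similarity
```

Ranking many state-of-the-art ANNs by such a composite is what lets the benchmark answer "which network is most brain-like" with a single leaderboard.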

Object Recognition

Aligning Artificial Neural Networks to the Brain yields Shallow Recurrent Architectures

no code implementations ICLR 2019 Jonas Kubilius, Martin Schrimpf, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Kailyn Schmidt, Aran Nayebi, Daniel Bear, Daniel L. K. Yamins, James J. DiCarlo

Deep artificial neural networks with spatially repeated processing (a.k.a. deep convolutional ANNs) have been established as the best class of candidate models of visual processing in the primate ventral visual processing stream.

Anatomy · Object Categorization

Teacher Guided Architecture Search

no code implementations ICCV 2019 Pouya Bashivan, Mark Tensen, James J. DiCarlo

We further show that measurements from only ~300 neurons in the primate visual system provide enough signal to find a network with an ImageNet top-1 error that is significantly lower than that achieved by performance-guided architecture search alone.

Computational Efficiency · Neural Architecture Search

Task-Driven Convolutional Recurrent Models of the Visual System

1 code implementation NeurIPS 2018 Aran Nayebi, Daniel Bear, Jonas Kubilius, Kohitij Kar, Surya Ganguli, David Sussillo, James J. DiCarlo, Daniel L. K. Yamins

Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet.

General Classification · Object Recognition

Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

no code implementations 12 Jun 2014 Charles F. Cadieu, Ha Hong, Daniel L. K. Yamins, Nicolas Pinto, Diego Ardila, Ethan A. Solomon, Najib J. Majaj, James J. DiCarlo

Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task.

Object · Object Recognition

Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream

no code implementations NeurIPS 2013 Daniel L. Yamins, Ha Hong, Charles Cadieu, James J. DiCarlo

In this work, we construct models of the ventral stream using a novel optimization procedure for category-level object recognition problems, and produce RDMs resembling both macaque IT and human ventral stream.

Object · Object Recognition
