Search Results for author: Daniel L. K. Yamins

Found 26 papers, 14 papers with code

Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

no code implementations • 12 Jun 2014 • Charles F. Cadieu, Ha Hong, Daniel L. K. Yamins, Nicolas Pinto, Diego Ardila, Ethan A. Solomon, Najib J. Majaj, James J. DiCarlo

Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task.

Tasks: Object, Object Recognition

A Useful Motif for Flexible Task Learning in an Embodied Two-Dimensional Visual Environment

no code implementations • 22 Jun 2017 • Kevin T. Feigelis, Daniel L. K. Yamins

Recent results from neuroscience and artificial intelligence suggest that the role of the general-purpose visual representation may be played by a deep convolutional neural network, and give some clues as to how task modules based on such a representation might be discovered and constructed.

Modular Continual Learning in a Unified Visual Environment

no code implementations • ICLR 2018 • Kevin T. Feigelis, Blue Sheffer, Daniel L. K. Yamins

A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly.

Tasks: Continual Learning

Learning to Play with Intrinsically-Motivated Self-Aware Agents

no code implementations • 21 Feb 2018 • Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins

We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering.

Tasks: motion prediction, Object

Emergence of Structured Behaviors from Curiosity-Based Intrinsic Motivation

no code implementations • 21 Feb 2018 • Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins

Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks.

Tasks: motion prediction, Object

Task-Driven Convolutional Recurrent Models of the Visual System

1 code implementation • NeurIPS 2018 • Aran Nayebi, Daniel Bear, Jonas Kubilius, Kohitij Kar, Surya Ganguli, David Sussillo, James J. DiCarlo, Daniel L. K. Yamins

Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet.

Tasks: General Classification, Object Recognition

Flexible Neural Representation for Physics Prediction

no code implementations • NeurIPS 2018 • Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins

Humans have a remarkable capacity to understand the physical dynamics of objects in their environment, flexibly capturing complex structures and interactions at multiple levels of detail.

Tasks: Relation Network

Aligning Artificial Neural Networks to the Brain yields Shallow Recurrent Architectures

no code implementations • ICLR 2019 • Jonas Kubilius, Martin Schrimpf, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Kailyn Schmidt, Aran Nayebi, Daniel Bear, Daniel L. K. Yamins, James J. DiCarlo

Deep artificial neural networks with spatially repeated processing (a.k.a. deep convolutional ANNs) have been established as the best class of candidate models of visual processing in the primate ventral visual processing stream.

Tasks: Anatomy, Object Categorization

Stochastic Neural Physics Predictor

no code implementations • 25 Sep 2019 • Piotr Tatarczyk, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins, Nils Thuerey

Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way.

Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?

1 code implementation • 2 Jan 2020 • Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, James J. DiCarlo

We therefore developed Brain-Score – a composite of multiple neural and behavioral benchmarks that score any ANN on how similar it is to the brain’s mechanisms for core object recognition – and we deployed it to evaluate a wide range of state-of-the-art deep ANNs.

Tasks: Object Recognition
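Brain-Score combines multiple neural and behavioral benchmarks into a single composite similarity score per model. A minimal sketch of that aggregation idea; the benchmark names and values below are hypothetical placeholders, not the actual Brain-Score suite, API, or results:

```python
# Minimal sketch: a composite brain-similarity score as the mean of
# per-benchmark scores in [0, 1]. Benchmark names and values are
# hypothetical placeholders, not real Brain-Score benchmarks or results.
def composite_score(benchmark_scores):
    """Average a dict of benchmark -> score into one composite value."""
    if not benchmark_scores:
        raise ValueError("need at least one benchmark score")
    return sum(benchmark_scores.values()) / len(benchmark_scores)

model_scores = {
    "V4-neural-predictivity": 0.60,   # hypothetical
    "IT-neural-predictivity": 0.55,   # hypothetical
    "behavioral-consistency": 0.70,   # hypothetical
}
print(round(composite_score(model_scores), 4))  # 0.6167
```

Ranking many candidate ANNs then reduces to sorting them by this one number, which is what makes a composite benchmark usable as a leaderboard.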

Two Routes to Scalable Credit Assignment without Weight Symmetry

1 code implementation • ICML 2020 • Daniel Kunin, Aran Nayebi, Javier Sagastuy-Brena, Surya Ganguli, Jonathan M. Bloom, Daniel L. K. Yamins

The neural plausibility of backpropagation has long been disputed, primarily for its use of non-local weight transport – the biologically dubious requirement that one neuron instantaneously measure the synaptic weights of another.

Tasks: Vocal Bursts Valence Prediction

Visual Grounding of Learned Physical Models

1 code implementation • ICML 2020 • Yunzhu Li, Toru Lin, Kexin Yi, Daniel M. Bear, Daniel L. K. Yamins, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba

The abilities to perform physical reasoning and to adapt to new environments, while intrinsic to humans, remain challenging to state-of-the-art computational models.

Tasks: Visual Grounding

Pruning neural networks without any data by iteratively conserving synaptic flow

5 code implementations • NeurIPS 2020 • Hidenori Tanaka, Daniel Kunin, Daniel L. K. Yamins, Surya Ganguli

Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time.
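The title names the mechanism: weights are scored by the "synaptic flow" they carry and pruned iteratively, using no training data at all. A hedged sketch of that saliency-and-prune loop for a tiny two-layer linear net, where saliency is |w| times the gradient of the total flow R computed on an all-ones input with absolute-valued, masked weights; the exponential schedule and global ranking are simplifications of mine, not the paper's exact procedure:

```python
# Hedged sketch of data-free iterative pruning by synaptic saliency for a
# two-layer linear net (plain Python, no frameworks). Treat the details as
# simplifications, not the exact published algorithm.

def saliencies(W1, M1, W2, M2):
    h, d, o = len(W1), len(W1[0]), len(W2)
    # With an all-ones input and R = sum of outputs:
    # dR/d|W1[j][k]| = sum_i |W2[i][j]|*M2;  dR/d|W2[i][j]| = sum_k |W1[j][k]|*M1.
    col2 = [sum(abs(W2[i][j]) * M2[i][j] for i in range(o)) for j in range(h)]
    row1 = [sum(abs(W1[j][k]) * M1[j][k] for k in range(d)) for j in range(h)]
    s1 = [[abs(W1[j][k]) * M1[j][k] * col2[j] for k in range(d)] for j in range(h)]
    s2 = [[abs(W2[i][j]) * M2[i][j] * row1[j] for j in range(h)] for i in range(o)]
    return s1, s2

def synflow_prune(W1, W2, sparsity, iters=5):
    """Iteratively zero the lowest-saliency weights until a `sparsity`
    fraction of all weights is masked out; no training data is used."""
    M1 = [[1.0] * len(W1[0]) for _ in W1]
    M2 = [[1.0] * len(W2[0]) for _ in W2]
    total = sum(len(r) for r in W1) + sum(len(r) for r in W2)
    for step in range(1, iters + 1):
        keep = (1.0 - sparsity) ** (step / iters)   # exponential schedule
        target_pruned = total - round(total * keep)
        s1, s2 = saliencies(W1, M1, W2, M2)
        alive = [(s1[j][k], 1, j, k) for j in range(len(W1))
                 for k in range(len(W1[0])) if M1[j][k]]
        alive += [(s2[i][j], 2, i, j) for i in range(len(W2))
                  for j in range(len(W2[0])) if M2[i][j]]
        alive.sort()                                 # lowest saliency first
        already = total - len(alive)
        for _, layer, a, b in alive[:max(0, target_pruned - already)]:
            (M1 if layer == 1 else M2)[a][b] = 0.0
    return M1, M2

M1, M2 = synflow_prune([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0]], sparsity=0.5)
print(M1, M2)  # half of the six weights are masked to 0.0
```

Recomputing saliencies between pruning rounds is what "iteratively conserving" buys: it avoids collapsing an entire layer, which a single-shot global ranking can do.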

Learning Physical Graph Representations from Visual Scenes

1 code implementation • NeurIPS 2020 • Daniel M. Bear, Chaofei Fan, Damian Mrowca, Yunzhu Li, Seth Alter, Aran Nayebi, Jeremy Schwartz, Li Fei-Fei, Jiajun Wu, Joshua B. Tenenbaum, Daniel L. K. Yamins

To overcome these limitations, we introduce the idea of Physical Scene Graphs (PSGs), which represent scenes as hierarchical graphs, with nodes in the hierarchy corresponding intuitively to object parts at different scales, and edges to physical connections between parts.

Tasks: Object, Object Categorization, +1
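The snippet above describes PSGs as hierarchical graphs whose nodes are object parts at different scales and whose edges are physical connections between parts. A minimal illustrative sketch of such a structure; the class and field names are assumptions of mine, not the paper's implementation:

```python
# Illustrative sketch of a hierarchical "physical scene graph": nodes are
# object parts, parent links form the scale hierarchy, and same-level edges
# mark physical connections. Names here are assumptions, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class PartNode:
    name: str
    level: int                                     # 0 = finest parts; higher = coarser
    parent: "PartNode | None" = None               # hierarchy edge (part -> whole)
    connected: list = field(default_factory=list)  # same-level physical connections

def connect(a, b):
    """Record a symmetric physical connection between two same-level parts."""
    a.connected.append(b)
    b.connected.append(a)

# A two-level toy scene: a table whose finest-scale parts are physically joined.
table = PartNode("table", level=1)
top = PartNode("tabletop", level=0, parent=table)
leg = PartNode("leg", level=0, parent=table)
connect(top, leg)
print(leg.parent.name, [p.name for p in leg.connected])  # table ['tabletop']
```

The appeal of the representation is that "object", "part", and "contact" queries all become graph traversals rather than separate pixel-level predictions.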

Identifying Learning Rules From Neural Network Observables

2 code implementations • NeurIPS 2020 • Aran Nayebi, Sanjana Srivastava, Surya Ganguli, Daniel L. K. Yamins

We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes, and that these results generalize to limited access to the trajectory and held-out architectures and learning curricula.

Tasks: Open-Ended Question Answering
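The claim above is that aggregate statistics of weights or activity changes suffice to tell learning rules apart. A toy sketch of that idea, separating plain SGD from SGD-with-momentum by a single aggregate trajectory statistic; the loss and hyperparameters are illustrative assumptions, not the paper's setup:

```python
# Sketch of the observables idea: two "learning rules" leave different
# fingerprints in aggregate statistics of the weight trajectory, so a simple
# classifier (here, comparing one statistic) can separate them without ever
# seeing the update rule itself. Toy loss and hyperparameters are my own.

def weight_update_sizes(rule, steps=50, lr=0.1, beta=0.9):
    w, v, sizes = 5.0, 0.0, []
    for _ in range(steps):
        g = 2.0 * w                                # gradient of the toy loss w**2
        v = beta * v + g if rule == "momentum" else g
        w -= lr * v
        sizes.append(abs(lr * v))                  # per-step weight change
    return sizes

mean_sgd = sum(weight_update_sizes("sgd")) / 50
mean_mom = sum(weight_update_sizes("momentum")) / 50
print(mean_sgd != mean_mom)  # True: the aggregate statistic separates the rules
```

In the paper's setting the same logic is applied at scale: many such statistics become features for a classifier over held-out architectures and curricula.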

Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics

1 code implementation • 8 Dec 2020 • Daniel Kunin, Javier Sagastuy-Brena, Surya Ganguli, Daniel L. K. Yamins, Hidenori Tanaka

Overall, by exploiting symmetry, our work demonstrates that we can analytically describe the learning dynamics of various parameter combinations at finite learning rates and batch sizes for state-of-the-art architectures trained on any dataset.
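The symmetry-to-conservation-law connection can be checked numerically on the smallest example I know of: f(a, b) = a*b has a rescaling symmetry (a -> c*a, b -> b/c), gradient flow then conserves a**2 - b**2 exactly, and a finite learning rate breaks the conservation only by O(lr**2) per step. A sketch, with illustrative numbers of my own choosing:

```python
# Numeric check of the "symmetry implies a (broken) conservation law" idea
# on f(a, b) = a*b with loss (a*b - y)**2. Under exact gradient flow the
# quantity a**2 - b**2 is conserved; finite-step gradient descent drifts
# from it, and the drift shrinks with the learning rate. Toy numbers only.

def drift(lr, steps=100, a=1.5, b=0.5, y=1.0):
    """Total drift of the conserved quantity a**2 - b**2 after `steps`
    gradient-descent updates at learning rate `lr`."""
    start = a * a - b * b
    for _ in range(steps):
        r = a * b - y                          # residual
        ga, gb = 2.0 * r * b, 2.0 * r * a      # gradients of the loss
        a, b = a - lr * ga, b - lr * gb
    return abs((a * a - b * b) - start)

print(drift(lr=0.01) < drift(lr=0.1))  # True: smaller steps, smaller drift
```

This mirrors the paper's framing: the symmetry fixes which combination of parameters is conserved, and the learning rate controls how strongly discrete dynamics break that conservation.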

Measuring and Modeling Physical Intrinsic Motivation

no code implementations • 22 May 2023 • Julio Martinez, Felix Binder, Haoliang Wang, Nick Haber, Judith Fan, Daniel L. K. Yamins

Finally, linearly combining the adversarial model with the number of collisions in a scene leads to the greatest improvement in predictivity of human responses, suggesting humans are driven towards scenarios that result in high information gain and physical activity.

Developmental Curiosity and Social Interaction in Virtual Agents

no code implementations • 22 May 2023 • Chris Doyle, Sarah Shader, Michelle Lau, Megumi Sano, Daniel L. K. Yamins, Nick Haber

We also find that learning a world model in the presence of an attentive caregiver helps the infant agent learn how to predict scenarios with challenging social and physical dynamics.

Unifying (Machine) Vision via Counterfactual World Modeling

no code implementations • 2 Jun 2023 • Daniel M. Bear, Kevin Feigelis, Honglin Chen, Wanhee Lee, Rahul Venkatesh, Klemen Kotar, Alex Durango, Daniel L. K. Yamins

Leading approaches in machine vision employ different architectures for different tasks, trained on costly task-specific labeled datasets.

Tasks: counterfactual, Optical Flow Estimation

Counterfactual World Modeling for Physical Dynamics Understanding

no code implementations • 11 Dec 2023 • Rahul Venkatesh, Honglin Chen, Kevin Feigelis, Daniel M. Bear, Khaled Jedoui, Klemen Kotar, Felix Binder, Wanhee Lee, Sherry Liu, Kevin A. Smith, Judith E. Fan, Daniel L. K. Yamins

Third, the counterfactual modeling capability enables the design of counterfactual queries to extract vision structures similar to keypoints, optical flows, and segmentations, which are useful for dynamics understanding.

Tasks: counterfactual
