Search Results for author: Daniel Yamins

Found 11 papers, 4 papers with code

Local Aggregation for Unsupervised Learning of Visual Embeddings

1 code implementation ICCV 2019 Chengxu Zhuang, Alex Lin Zhai, Daniel Yamins

Unsupervised approaches to learning in neural networks are of substantial interest for furthering artificial intelligence, both because they would enable the training of networks without the need for large numbers of expensive annotations, and because they would be better models of the kind of general-purpose learning deployed by humans.

Clustering, Contrastive Learning, +6

Unsupervised Learning from Video with Deep Neural Embeddings

1 code implementation CVPR 2020 Chengxu Zhuang, Tianwei She, Alex Andonian, Max Sobol Mark, Daniel Yamins

Because of the rich dynamical structure of videos and their ubiquity in everyday life, it is a natural idea that video data could serve as a powerful unsupervised learning signal for training visual representations in deep neural networks.

Action Recognition, Object Recognition

Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System

1 code implementation NeurIPS 2017 Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins

In large part, rodents see the world through their whiskers, a powerful tactile sense enabled by a series of brain areas that form the whisker-trigeminal system.

Decision Making

Conditional Negative Sampling for Contrastive Learning of Visual Representations

1 code implementation ICLR 2021 Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman

To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.

Contrastive Learning, Instance Segmentation, +4
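
To give a concrete picture of the "ring" idea described in the abstract above, here is a minimal sketch, not the authors' released implementation: negatives are drawn only from candidates whose similarity to the anchor falls between a lower and an upper percentile, so that both very easy and very hard (likely same-class) points are excluded. The function name, percentile thresholds, and sample count are illustrative assumptions.

```python
import numpy as np

def ring_negative_sample(anchor, candidates, lower_pct=50, upper_pct=90,
                         n_neg=8, rng=None):
    """Sample negatives from a similarity 'ring' around the anchor.

    Only candidates whose cosine similarity to the anchor lies between the
    lower and upper percentiles are eligible (hypothetical sketch, not the
    paper's exact estimator).
    """
    rng = rng or np.random.default_rng()
    # Cosine similarity of every candidate embedding to the anchor embedding.
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a
    lo, hi = np.percentile(sims, [lower_pct, upper_pct])
    ring = np.where((sims >= lo) & (sims <= hi))[0]
    # Fall back to all candidates if the ring happens to be empty.
    if len(ring) == 0:
        ring = np.arange(len(candidates))
    idx = rng.choice(ring, size=min(n_neg, len(ring)), replace=False)
    return candidates[idx], idx
```

In a contrastive setup, the sampled negatives would then be fed, together with the anchor and its positive, into a standard InfoNCE-style loss.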

Local Label Propagation for Large-Scale Semi-Supervised Learning

no code implementations 28 May 2019 Chengxu Zhuang, Xuehao Ding, Divyanshu Murli, Daniel Yamins

It then propagates pseudolabels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood.

Clustering, Scene Recognition
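
The distance- and density-weighted propagation mentioned in the abstract above can be illustrated with a rough sketch under assumed forms: a Gaussian kernel on inter-point distance and a k-nearest-neighbour density estimate. The function and parameter names are hypothetical; the paper's actual weighting scheme may differ.

```python
import numpy as np

def propagate_pseudolabels(labeled_emb, labels, unlabeled_emb, n_classes,
                           sigma=1.0, k_density=10):
    """Toy local label propagation: each unlabeled point collects class votes
    from labeled points, weighted by embedding distance and by the local
    density around each labeled point."""
    # Pairwise distances from unlabeled to labeled embeddings.
    d = np.linalg.norm(unlabeled_emb[:, None, :] - labeled_emb[None, :, :], axis=-1)
    dist_w = np.exp(-d ** 2 / (2 * sigma ** 2))        # closer labeled points count more

    # Crude density estimate: inverse mean distance to the k nearest
    # labeled neighbours of each labeled point (excluding itself).
    dl = np.linalg.norm(labeled_emb[:, None, :] - labeled_emb[None, :, :], axis=-1)
    knn = np.sort(dl, axis=1)[:, 1:k_density + 1]
    density_w = 1.0 / (knn.mean(axis=1) + 1e-8)        # denser regions propagate more

    votes = dist_w * density_w[None, :]                # combined propagation weight
    scores = np.zeros((len(unlabeled_emb), n_classes))
    for c in range(n_classes):
        scores[:, c] = votes[:, labels == c].sum(axis=1)
    return scores.argmax(axis=1)                       # pseudolabel per unlabeled point
```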

Flexible and Efficient Long-Range Planning Through Curious Exploration

no code implementations ICML 2020 Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin Feigelis, Daniel Yamins

In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances.

Imitation Learning, Model-based Reinforcement Learning, +4

Active World Model Learning with Progress Curiosity

no code implementations 15 Jul 2020 Kuno Kim, Megumi Sano, Julian De Freitas, Nick Haber, Daniel Yamins

Humans learn world models by curiously exploring their environment, in the process acquiring compact abstractions of high bandwidth sensory inputs, the ability to plan across long temporal horizons, and an understanding of the behavioral patterns of other agents.

Active World Model Learning in Agent-rich Environments with Progress Curiosity

no code implementations ICML 2020 Kuno Kim, Megumi Sano, Julian De Freitas, Nick Haber, Daniel Yamins

World models are a family of predictive models that solve self-supervised problems on how the world evolves.
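As a minimal sketch of that definition (the architecture, dimensions, and PyTorch usage here are illustrative assumptions, not the paper's model), a world model can be trained purely from the data stream by predicting the next observation from the current observation and action:

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Minimal world model: predicts the next observation from the current
    observation and action (self-supervised next-step prediction)."""
    def __init__(self, obs_dim=16, act_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

# The training signal is simply the next observation: no human labels needed.
model = TinyWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs, act, next_obs = torch.randn(32, 16), torch.randn(32, 4), torch.randn(32, 16)
opt.zero_grad()
loss = ((model(obs, act) - next_obs) ** 2).mean()
loss.backward()
opt.step()
```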

Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously

no code implementations 3 Apr 2021 Rosa Cao, Daniel Yamins

These criteria require us, first, to identify a level of description that is both abstract but detailed enough to be "runnable", and then, to construct model-to-brain mappings using the same principles as those employed for brain-to-brain mapping across individuals.

Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility

no code implementations 3 Apr 2021 Rosa Cao, Daniel Yamins

Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain.
