no code implementations • ICML 2020 • Kuno Kim, Megumi Sano, Julian De Freitas, Nick Haber, Daniel Yamins
World models are a family of predictive models that solve self-supervised problems on how the world evolves.
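A minimal sketch of that idea, assuming a latent-state setup (this is an illustration, not the paper's model): a world model can be read as a forward predictor trained with the environment's own future states as targets, so no human annotation is needed. The architecture, dimensions, and MSE loss below are assumptions.

```python
# Sketch: a world model as a self-supervised forward predictor.
# Given the current latent state and action, predict the next latent state.
import torch
import torch.nn as nn

class ForwardWorldModel(nn.Module):
    def __init__(self, state_dim=64, action_dim=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# Self-supervised training signal: the target is the next observed state,
# provided by the environment itself rather than by labels.
model = ForwardWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
state, action, next_state = torch.randn(32, 64), torch.randn(32, 8), torch.randn(32, 64)
loss = nn.functional.mse_loss(model(state, action), next_state)
opt.zero_grad(); loss.backward(); opt.step()
```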
no code implementations • 3 Apr 2021 • Rosa Cao, Daniel Yamins
These criteria require us, first, to identify a level of description that is abstract yet detailed enough to be "runnable", and then to construct model-to-brain mappings using the same principles as those employed for brain-to-brain mapping across individuals.
no code implementations • 3 Apr 2021 • Rosa Cao, Daniel Yamins
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain.
1 code implementation • ICLR 2021 • Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman
To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
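A minimal sketch of the "ring" idea, not the authors' implementation: for each anchor, candidate negatives are ranked by similarity and only an intermediate band is kept, excluding both the closest candidates (likely false negatives) and the farthest, easiest ones. The function name `ring_infonce`, the band fractions `lo`/`hi`, and the temperature `tau` are illustrative assumptions.

```python
# Sketch: InfoNCE-style loss with negatives sampled from a "ring" around each anchor.
import torch
import torch.nn.functional as F

def ring_infonce(anchors, positives, candidates, lo=0.1, hi=0.5, tau=0.1):
    """anchors, positives: (B, D); candidates: (N, D) pool of potential negatives."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    c = F.normalize(candidates, dim=1)

    sim_pos = (a * p).sum(dim=1, keepdim=True) / tau   # (B, 1) positive similarity
    sim_all = a @ c.t() / tau                           # (B, N) candidate similarities

    # Sort candidates by similarity to each anchor and keep only a "ring":
    # drop the closest `lo` fraction (likely false negatives) and everything
    # beyond the `hi` fraction (too easy), keeping the band in between.
    sim_sorted, _ = sim_all.sort(dim=1, descending=True)
    n = sim_sorted.size(1)
    sim_neg = sim_sorted[:, int(lo * n):int(hi * n)]

    logits = torch.cat([sim_pos, sim_neg], dim=1)       # positive sits at index 0
    labels = torch.zeros(a.size(0), dtype=torch.long, device=a.device)
    return F.cross_entropy(logits, labels)

# Example usage with random embeddings:
loss = ring_infonce(torch.randn(32, 128), torch.randn(32, 128), torch.randn(4096, 128))
```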
no code implementations • 15 Jul 2020 • Kuno Kim, Megumi Sano, Julian De Freitas, Nick Haber, Daniel Yamins
Humans learn world models by curiously exploring their environment, in the process acquiring compact abstractions of high-bandwidth sensory inputs, the ability to plan across long temporal horizons, and an understanding of the behavioral patterns of other agents.
no code implementations • 27 May 2020 • Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, Noah Goodman
Reformulating previous learning objectives in terms of mutual information also simplifies and stabilizes them.
no code implementations • ICML 2020 • Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin Feigelis, Daniel Yamins
In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances.
1 code implementation • CVPR 2020 • Chengxu Zhuang, Tianwei She, Alex Andonian, Max Sobol Mark, Daniel Yamins
Because of the rich dynamical structure of videos and their ubiquity in everyday life, it is a natural idea that video data could serve as a powerful unsupervised learning signal for training visual representations in deep neural networks.
no code implementations • 28 May 2019 • Chengxu Zhuang, Xuehao Ding, Divyanshu Murli, Daniel Yamins
The method then propagates pseudolabels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood.
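A minimal sketch of such a propagation step, following the stated weighting but not the paper's code: each unlabeled point collects class votes from its nearest labeled neighbors, with each vote scaled by a distance kernel and by a k-NN estimate of local density. The kernel width `sigma`, the density estimate, and the function name are assumptions.

```python
# Sketch: pseudolabel propagation weighted by inter-point distance and local density.
import torch
import torch.nn.functional as F

def propagate_pseudolabels(emb_l, labels_l, emb_u, num_classes, k=50, sigma=0.1):
    """emb_l: (Nl, D) labeled embeddings; labels_l: (Nl,) long; emb_u: (Nu, D) unlabeled."""
    emb_l = F.normalize(emb_l, dim=1)
    emb_u = F.normalize(emb_u, dim=1)

    # Distance from each unlabeled point to every labeled point,
    # then keep the k nearest labeled neighbors.
    dists = torch.cdist(emb_u, emb_l)                     # (Nu, Nl)
    knn_d, knn_idx = dists.topk(k, dim=1, largest=False)  # (Nu, k)

    # Distance term: closer labeled neighbors get exponentially more weight.
    w_dist = torch.exp(-knn_d / sigma)                    # (Nu, k)

    # Density term: neighbors in dense regions of the labeled set get more weight
    # (density approximated as inverse mean distance to their own k nearest neighbors).
    d_ll = torch.cdist(emb_l, emb_l)
    knn_ll, _ = d_ll.topk(k + 1, dim=1, largest=False)    # +1 to drop self (distance 0)
    density = 1.0 / (knn_ll[:, 1:].mean(dim=1) + 1e-8)    # (Nl,)
    w = w_dist * density[knn_idx]                         # (Nu, k)

    # Accumulate weighted class votes, then normalize into soft pseudolabels.
    votes = torch.zeros(emb_u.size(0), num_classes, device=emb_u.device)
    votes.scatter_add_(1, labels_l[knn_idx], w)
    return votes / votes.sum(dim=1, keepdim=True)
```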
1 code implementation • ICCV 2019 • Chengxu Zhuang, Alex Lin Zhai, Daniel Yamins
Unsupervised approaches to learning in neural networks are of substantial interest for furthering artificial intelligence, both because they would enable the training of networks without the need for large numbers of expensive annotations, and because they would be better models of the kind of general-purpose learning deployed by humans.
Ranked #12 on Contrastive Learning on imagenet-1k
1 code implementation • NeurIPS 2017 • Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins
In large part, rodents see the world through their whiskers, a powerful tactile sense enabled by a series of brain areas that form the whisker-trigeminal system.