no code implementations • 27 Jan 2023 • Alex Warstadt, Leshem Choshen, Aaron Mueller, Adina Williams, Ethan Wilcox, Chengxu Zhuang
In partnership with CoNLL and CMCL, we provide a platform for approaches to pretraining with a limited-size corpus sourced from data inspired by the input to children.
1 code implementation • NeurIPS 2022 • Chengxu Zhuang, Violet Xiang, Yoon Bai, Xiaoxuan Jia, Nicholas Turk-Browne, Kenneth Norman, James J. DiCarlo, Daniel LK Yamins
Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanisms by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning abilities, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.
1 code implementation • ICLR 2021 • Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman
To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
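A minimal NumPy sketch of the ring idea: keep only candidate negatives whose similarity to the anchor falls between two thresholds, so they are hard but not so close that they are likely duplicates of the positive. The function name, cosine similarity, and the percentile cutoffs are illustrative assumptions, not the paper's reference implementation.

```python
# Sketch of ring-based conditional negative sampling (hypothetical
# parameters; not the authors' reference implementation).
import numpy as np

def ring_negatives(anchor, candidates, lo_pct=70, hi_pct=95):
    """Keep negatives whose similarity to the anchor lies in a 'ring':
    above the lo_pct percentile but below the hi_pct percentile."""
    # Cosine similarity between the anchor and every candidate embedding.
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a
    lo, hi = np.percentile(sims, [lo_pct, hi_pct])
    # Candidates inside the ring: hard, but not near-duplicates of the positive.
    mask = (sims >= lo) & (sims <= hi)
    return candidates[mask], sims[mask]

# Toy usage: 128-d embeddings, 1000 candidate negatives.
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
candidates = rng.normal(size=(1000, 128))
negs, sims = ring_negatives(anchor, candidates)
print(negs.shape, sims.min(), sims.max())
```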
no code implementations • 27 May 2020 • Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, Noah Goodman
Reformulating previous learning objectives in terms of mutual information also simplifies and stabilizes them.
no code implementations • 28 May 2019 • Chengxu Zhuang, Xuehao Ding, Divyanshu Murli, Daniel Yamins
It then propagates pseudolabels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood.
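A rough sketch of distance- and density-weighted propagation, under assumed details: the weighting function, the k-nearest-neighbor density estimate, and all names below are illustrative choices, not the paper's actual algorithm.

```python
# Sketch of pseudolabel propagation weighted by inter-point distance and
# local data density (illustrative assumptions throughout).
import numpy as np

def propagate_labels(emb_known, labels_known, emb_unknown, k=10, temp=0.1):
    """Assign each unknown point a label by a weighted vote over known points,
    where weights combine embedding closeness and local density."""
    # Distances from every unknown point to every known point.
    d = np.linalg.norm(emb_unknown[:, None, :] - emb_known[None, :, :], axis=-1)
    # Local density of each known point: inverse mean distance to its k
    # nearest known neighbors (denser regions propagate more strongly).
    d_kk = np.linalg.norm(emb_known[:, None, :] - emb_known[None, :, :], axis=-1)
    knn = np.sort(d_kk, axis=1)[:, 1:k + 1]
    density = 1.0 / (knn.mean(axis=1) + 1e-8)
    # Propagation weight: closeness (exponential in negative distance) times density.
    w = np.exp(-d / temp) * density[None, :]
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    # Weighted vote over the known pseudolabels.
    n_classes = labels_known.max() + 1
    votes = np.zeros((emb_unknown.shape[0], n_classes))
    for c in range(n_classes):
        votes[:, c] = w[:, labels_known == c].sum(axis=1)
    return votes.argmax(axis=1)
```

The density term in this sketch captures the intuition in the excerpt: points sitting in dense regions of the embedding are treated as more reliable sources of pseudolabels than isolated ones.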
1 code implementation • CVPR 2020 • Chengxu Zhuang, Tianwei She, Alex Andonian, Max Sobol Mark, Daniel Yamins
Because of the rich dynamical structure of videos and their ubiquity in everyday life, it is a natural idea that video data could serve as a powerful unsupervised learning signal for training visual representations in deep neural networks.
1 code implementation • ICCV 2019 • Chengxu Zhuang, Alex Lin Zhai, Daniel Yamins
Unsupervised approaches to learning in neural networks are of substantial interest for furthering artificial intelligence, both because they would enable the training of networks without the need for large numbers of expensive annotations, and because they would be better models of the kind of general-purpose learning deployed by humans.
Ranked #12 on Contrastive Learning on imagenet-1k
no code implementations • NeurIPS 2018 • Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins
Humans have a remarkable capacity to understand the physical dynamics of objects in their environment, flexibly capturing complex structures and interactions at multiple levels of detail.
1 code implementation • NeurIPS 2017 • Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins
In large part, rodents see the world through their whiskers, a powerful tactile sense enabled by a series of brain areas that form the whisker-trigeminal system.
no code implementations • 14 Nov 2014 • Ming-Min Zhao, Chengxu Zhuang, Yizhou Wang, Tai Sing Lee
We propose a new neurally inspired model that learns to encode the global relationship context of visual events across time and space, and to use this contextual information to modulate the analysis-by-synthesis process in a predictive coding framework.