no code implementations • 9 Apr 2024 • Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, Chengxu Zhuang
The big changes for this year's competition are as follows: First, we replace the loose track with a paper track, which allows (for example) non-model-based submissions, novel cognitively-inspired benchmarks, or analysis techniques.
1 code implementation • 21 Mar 2024 • Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas
Today's most accurate language models are trained on orders of magnitude more language data than human language learners receive - but with no supervision from other sensory modalities that play a crucial role in human learning.
1 code implementation • 20 Oct 2023 • Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas
But to achieve these results, LMs must be trained in distinctly un-human-like ways - requiring orders of magnitude more language data than children receive during development, and without perceptual or social context.
1 code implementation • 27 Jan 2023 • Alex Warstadt, Leshem Choshen, Aaron Mueller, Adina Williams, Ethan Wilcox, Chengxu Zhuang
In partnership with CoNLL and CMCL, we provide a platform for approaches to pretraining with a limited-size corpus sourced from data inspired by the input to children.
1 code implementation • NeurIPS 2022 • Chengxu Zhuang, Violet Xiang, Yoon Bai, Xiaoxuan Jia, Nicholas Turk-Browne, Kenneth Norman, James J. DiCarlo, Daniel LK Yamins
Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanism by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning abilities, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.
1 code implementation • ICLR 2021 • Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman
To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
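The core idea of a "ring" of negatives can be sketched as sampling negatives whose distance to the anchor falls between two quantiles, so they are neither trivially far nor indistinguishably close. This is a minimal illustrative sketch, not the paper's estimator; the function name and quantile parameters (`r_lo`, `r_hi`) are assumptions.

```python
import numpy as np

def ring_negatives(anchor, candidates, r_lo=0.25, r_hi=0.75, k=8, seed=0):
    """Sample k negatives whose distance to the anchor lies in a "ring":
    between the r_lo and r_hi quantiles of all candidate distances.
    Illustrative sketch only; names and defaults are not from the paper."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(candidates - anchor, axis=1)   # distance to each candidate
    lo, hi = np.quantile(d, [r_lo, r_hi])             # ring boundaries
    ring = np.where((d >= lo) & (d <= hi))[0]         # candidates inside the ring
    idx = rng.choice(ring, size=min(k, ring.size), replace=False)
    return candidates[idx]
```

Conditioning the negative distribution this way changes which pairs dominate the contrastive objective, which is what lets such estimators trade off bias and variance in the mutual-information bound.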
no code implementations • 27 May 2020 • Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, Noah Goodman
Reformulating previous learning objectives in terms of mutual information also simplifies and stabilizes them.
1 code implementation • CVPR 2020 • Chengxu Zhuang, Tianwei She, Alex Andonian, Max Sobol Mark, Daniel Yamins
Because of the rich dynamical structure of videos and their ubiquity in everyday life, it is a natural idea that video data could serve as a powerful unsupervised learning signal for training visual representations in deep neural networks.
no code implementations • 28 May 2019 • Chengxu Zhuang, Xuehao Ding, Divyanshu Murli, Daniel Yamins
It then propagates pseudolabels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood.
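The propagation step described above can be sketched as iterative label spreading over an affinity graph whose edge weights combine a distance kernel with a local-density estimate. This is a minimal sketch of the idea under stated assumptions (a Gaussian kernel, a k-NN density proxy, and hard clamping of known labels), not the paper's exact algorithm; all names and parameters are illustrative.

```python
import numpy as np

def propagate_pseudolabels(X, labels, n_iters=10, sigma=1.0, density_k=5):
    """Spread labels from labeled points (labels >= 0) to unlabeled ones
    (labels == -1). Edges weight both inter-point distance (Gaussian
    kernel) and the target point's local density (inverse mean k-NN
    distance). Illustrative sketch; not the paper's exact method."""
    n = X.shape[0]
    classes = np.unique(labels[labels >= 0])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    # local density: inverse of mean distance to the density_k nearest neighbours
    knn_d = np.sort(np.sqrt(d2), axis=1)[:, 1:density_k + 1].mean(1)
    density = 1.0 / (knn_d + 1e-8)
    # affinity = distance kernel * density of the source point, row-normalized
    W = np.exp(-d2 / (2 * sigma ** 2)) * density[None, :]
    np.fill_diagonal(W, 0.0)
    W = W / W.sum(1, keepdims=True)
    # class scores: one-hot for labeled points, uniform for unlabeled
    onehot = np.zeros((n, classes.size))
    for i, c in enumerate(classes):
        onehot[labels == c, i] = 1.0
    labeled = labels >= 0
    F = onehot.copy()
    F[~labeled] = 1.0 / classes.size
    for _ in range(n_iters):
        F = W @ F                    # spread scores along weighted edges
        F[labeled] = onehot[labeled]  # clamp known labels each iteration
    return classes[F.argmax(1)]
```

Weighting by density biases propagation toward well-populated regions of the embedding, so pseudolabels cross dense clusters before they leak across sparse boundaries.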
1 code implementation • ICCV 2019 • Chengxu Zhuang, Alex Lin Zhai, Daniel Yamins
Unsupervised approaches to learning in neural networks are of substantial interest for furthering artificial intelligence, both because they would enable the training of networks without the need for large numbers of expensive annotations, and because they would be better models of the kind of general-purpose learning deployed by humans.
no code implementations • NeurIPS 2018 • Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins
Humans have a remarkable capacity to understand the physical dynamics of objects in their environment, flexibly capturing complex structures and interactions at multiple levels of detail.
1 code implementation • NeurIPS 2017 • Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins
In large part, rodents see the world through their whiskers, a powerful tactile sense enabled by a series of brain areas that form the whisker-trigeminal system.
no code implementations • 14 Nov 2014 • Ming-Min Zhao, Chengxu Zhuang, Yizhou Wang, Tai Sing Lee
We propose a new neurally inspired model that can learn to encode the global relationship context of visual events across time and space and to use the contextual information to modulate the analysis-by-synthesis process in a predictive coding framework.