Search Results for author: Chengxu Zhuang

Found 13 papers, 8 papers with code

[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus

no code implementations · 9 Apr 2024 · Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, Chengxu Zhuang

The big changes for this year's competition are as follows: First, we replace the loose track with a paper track, which allows (for example) non-model-based submissions, novel cognitively-inspired benchmarks, or analysis techniques.

Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling

1 code implementation · 21 Mar 2024 · Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

Today's most accurate language models are trained on orders of magnitude more language data than human language learners receive - but with no supervision from other sensory modalities that play a crucial role in human learning.

Grounded language learning · Language Modelling +2
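
The title suggests a contrastive objective that ties lexical representations to visual ones. A generic, purely illustrative sketch of word-level contrastive visual grounding follows; the pooling into one lexical embedding per caption, the temperature, and the batch-diagonal pairing are assumptions, not the paper's actual objective:

    import torch
    import torch.nn.functional as F

    def word_image_contrastive(word_emb, img_emb, tau=0.07):
        # word_emb: (B, D) pooled lexical embedding per caption;
        # img_emb:  (B, D) embedding of the paired image.
        w = F.normalize(word_emb, dim=-1)
        v = F.normalize(img_emb, dim=-1)
        logits = (w @ v.T) / tau        # (B, B) similarity matrix
        targets = torch.arange(len(w))  # matching pairs sit on the diagonal
        return F.cross_entropy(logits, targets)

Each caption's lexical embedding is pushed toward its own image and away from the other images in the batch, the standard InfoNCE setup reduced to one direction.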

Visual Grounding Helps Learn Word Meanings in Low-Data Regimes

1 code implementation · 20 Oct 2023 · Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

But to achieve these results, LMs must be trained in distinctly un-human-like ways - requiring orders of magnitude more language data than children receive during development, and without perceptual or social context.

Image Captioning · Language Acquisition +5

Call for Papers -- The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus

1 code implementation · 27 Jan 2023 · Alex Warstadt, Leshem Choshen, Aaron Mueller, Adina Williams, Ethan Wilcox, Chengxu Zhuang

In partnership with CoNLL and CMCL, we provide a platform for approaches to pretraining with a limited-size corpus sourced from data inspired by the input to children.

Language Acquisition · Language Modelling +1

How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning?

1 code implementation · NeurIPS 2022 · Chengxu Zhuang, Violet Xiang, Yoon Bai, Xiaoxuan Jia, Nicholas Turk-Browne, Kenneth Norman, James J. DiCarlo, Daniel LK Yamins

Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanism by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning abilities, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.

Self-Supervised Learning

Conditional Negative Sampling for Contrastive Learning of Visual Representations

1 code implementation · ICLR 2021 · Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman

To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.

Contrastive Learning · Instance Segmentation +4
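
One way to read the "ring" construction: negatives are drawn only from candidates whose similarity to the query falls between two percentile bounds, excluding both trivially easy negatives and likely false negatives. A minimal hand-written sketch under that reading follows; the percentile bounds, temperature, and memory-bank setup are illustrative assumptions, not the authors' released implementation:

    import torch
    import torch.nn.functional as F

    def ring_infonce(query, positive, bank, lower=0.5, upper=0.9, tau=0.07):
        # query, positive: (D,) L2-normalized embeddings; bank: (N, D) candidates.
        sims = bank @ query                       # similarity to every candidate
        lo = torch.quantile(sims, lower)          # inner edge of the ring
        hi = torch.quantile(sims, upper)          # outer edge of the ring
        ring = bank[(sims >= lo) & (sims <= hi)]  # keep negatives that are neither
                                                  # trivially far nor suspiciously near
        pos = (query @ positive).view(1) / tau
        neg = (ring @ query) / tau
        logits = torch.cat([pos, neg]).view(1, -1)
        target = torch.zeros(1, dtype=torch.long) # index 0 is the positive
        return F.cross_entropy(logits, target)

Tightening the ring (raising lower, lowering upper) makes the negatives harder, which is the knob the conditional-sampling view exposes.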

Local Label Propagation for Large-Scale Semi-Supervised Learning

no code implementations · 28 May 2019 · Chengxu Zhuang, Xuehao Ding, Divyanshu Murli, Daniel Yamins

It then propagates pseudolabels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood.

Clustering · Scene Recognition
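
A minimal sketch of the propagation rule as described: votes from nearby labeled points are weighted by a Gaussian kernel on inter-point distance times a local-density term. The inverse-mean-distance density proxy and the kernel choice are illustrative assumptions, not necessarily the paper's exact weighting:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def propagate_labels(emb_labeled, y_labeled, emb_unlabeled, k=10, sigma=1.0):
        # Local density of each labeled point: inverse mean distance to its k
        # nearest labeled neighbors (a simple proxy; not the paper's measure).
        nn = NearestNeighbors(n_neighbors=k + 1).fit(emb_labeled)
        d_ll, _ = nn.kneighbors(emb_labeled)
        density = 1.0 / (d_ll[:, 1:].mean(axis=1) + 1e-8)  # drop self-distance

        # Each unlabeled point collects votes from its k nearest labeled points,
        # weighted by proximity (Gaussian kernel) times the voter's local density.
        d_ul, idx = nn.kneighbors(emb_unlabeled, n_neighbors=k)
        w = np.exp(-d_ul ** 2 / (2 * sigma ** 2)) * density[idx]

        n_classes = int(y_labeled.max()) + 1
        votes = np.zeros((len(emb_unlabeled), n_classes))
        for c in range(n_classes):
            votes[:, c] = (w * (y_labeled[idx] == c)).sum(axis=1)
        return votes.argmax(axis=1)  # pseudolabels for the unlabeled points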

Unsupervised Learning from Video with Deep Neural Embeddings

1 code implementation · CVPR 2020 · Chengxu Zhuang, Tianwei She, Alex Andonian, Max Sobol Mark, Daniel Yamins

Because of the rich dynamical structure of videos and their ubiquity in everyday life, it is a natural idea that video data could serve as a powerful unsupervised learning signal for training visual representations in deep neural networks.

Action Recognition · Object Recognition

Local Aggregation for Unsupervised Learning of Visual Embeddings

1 code implementation · ICCV 2019 · Chengxu Zhuang, Alex Lin Zhai, Daniel Yamins

Unsupervised approaches to learning in neural networks are of substantial interest for furthering artificial intelligence, both because they would enable the training of networks without the need for large numbers of expensive annotations, and because they would be better models of the kind of general-purpose learning deployed by humans.

Clustering · Contrastive Learning +6

Flexible Neural Representation for Physics Prediction

no code implementations · NeurIPS 2018 · Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins

Humans have a remarkable capacity to understand the physical dynamics of objects in their environment, flexibly capturing complex structures and interactions at multiple levels of detail.

Relation Network

Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System

1 code implementation · NeurIPS 2017 · Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins

In large part, rodents see the world through their whiskers, a powerful tactile sense enabled by a series of brain areas that form the whisker-trigeminal system.

Decision Making

Predictive Encoding of Contextual Relationships for Perceptual Inference, Interpolation and Prediction

no code implementations · 14 Nov 2014 · Ming-Min Zhao, Chengxu Zhuang, Yizhou Wang, Tai Sing Lee

We propose a new neurally-inspired model that can learn to encode the global relationship context of visual events across time and space and to use the contextual information to modulate the analysis by synthesis process in a predictive coding framework.
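
As a toy illustration of the analysis-by-synthesis loop inside a predictive coding framework (an assumption-laden sketch, not the paper's model, which additionally learns contextual modulation across time and space): a linear generative layer predicts the input top-down, and the bottom-up prediction error drives updates to the latent cause.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 4)) * 0.1  # generative weights: latent cause -> input
    x = rng.normal(size=16)             # observed input
    z = np.zeros(4)                     # latent cause (current hypothesis)

    for _ in range(50):
        x_hat = W @ z                   # synthesis: top-down prediction of the input
        err = x - x_hat                 # prediction error (bottom-up signal)
        z += 0.1 * (W.T @ err)          # analysis: revise the hypothesis to shrink error

    print(np.linalg.norm(x - W @ z))    # residual prediction error after inference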
