Search Results for author: Alex Fang

Found 14 papers, 8 papers with code

CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning

no code implementations • 29 May 2024 • Yiping Wang, Yifang Chen, Wendan Yan, Alex Fang, Wenjing Zhou, Kevin Jamieson, Simon Shaolei Du

By combining our methods with the current best methods, DFN (Fang et al., 2023) and HYPE (Kim et al., 2024), we can boost average performance on downstream tasks by 0.9%, achieving a new state of the art.

Contrastive Learning • Language Modelling

URDFormer: A Pipeline for Constructing Articulated Simulation Environments from Real-World Images

no code implementations • 19 May 2024 • Zoey Chen, Aaron Walsman, Marius Memmel, Kaichun Mo, Alex Fang, Karthikeya Vemuri, Alan Wu, Dieter Fox, Abhishek Gupta

We present an integrated end-to-end pipeline that generates simulation scenes, complete with articulated kinematic and dynamic structures, from real-world images, and we use these scenes to train robotic control policies.

Scene Generation

Data Filtering Networks

2 code implementations • 29 Sep 2023 • Alex Fang, Albin Madappally Jose, Amit Jain, Ludwig Schmidt, Alexander Toshev, Vaishaal Shankar

Our key finding is that the quality of a network for filtering is distinct from its performance on downstream tasks: for instance, a model that performs well on ImageNet can yield worse training sets than a model with low ImageNet accuracy that is trained on a small amount of high-quality data.

Language Modelling

Neural Priming for Sample-Efficient Adaptation

1 code implementation • NeurIPS 2023 • Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati, Roozbeh Mottaghi, Aniruddha Kembhavi, Ludwig Schmidt, Ali Farhadi

Performing lightweight updates on the recalled data significantly improves accuracy across a variety of distribution shift and transfer learning benchmarks.

Transfer Learning

Neural Radiance Field Codebooks

1 code implementation • 10 Jan 2023 • Matthew Wallingford, Aditya Kusupati, Alex Fang, Vivek Ramanujan, Aniruddha Kembhavi, Roozbeh Mottaghi, Ali Farhadi

Compositional representations of the world are a promising step towards enabling high-level scene understanding and efficient transfer to downstream tasks.

Object Representation Learning +1

Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)

2 code implementations • 3 May 2022 • Alex Fang, Gabriel Ilharco, Mitchell Wortsman, Yuhao Wan, Vaishaal Shankar, Achal Dave, Ludwig Schmidt

Contrastively trained language-image models such as CLIP, ALIGN, and BASIC have demonstrated unprecedented robustness to multiple challenging natural distribution shifts.

Ranked #94 on Image Classification on ObjectNet (using extra training data)

Image Classification

The ISO Standard for Dialogue Act Annotation, Second Edition

no code implementations • LREC 2020 • Harry Bunt, Volha Petukhova, Emer Gilmartin, Catherine Pelachaud, Alex Fang, Simon Keizer, Laurent Prévot

ISO standard 24617-2 for dialogue act annotation, established in 2012, has in the past few years been used both in corpus annotation and in the design of components for spoken and multimodal dialogue systems.

The DialogBank

no code implementations • LREC 2016 • Harry Bunt, Volha Petukhova, Andrei Malchanau, Kars Wijnhoven, Alex Fang

Some of these dialogues have been taken from existing corpora and have been re-annotated according to the ISO standard; others have been annotated directly according to the standard.
