Search Results for author: Vaishaal Shankar

Found 25 papers, 13 papers with code

Flare Prediction Using Photospheric and Coronal Image Data

no code implementations • 3 Aug 2017 • Eric Jonas, Monica G. Bobra, Vaishaal Shankar, J. Todd Hoeksema, Benjamin Recht

This is the first attempt to predict solar flares using photospheric vector magnetic field data as well as multiple wavelengths of image data from the chromosphere, transition region, and corona.

Do CIFAR-10 Classifiers Generalize to CIFAR-10?

3 code implementations • 1 Jun 2018 • Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar

Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models.

Cloud Programming Simplified: A Berkeley View on Serverless Computing

no code implementations • 9 Feb 2019 • Eric Jonas, Johann Schleier-Smith, Vikram Sreekanti, Chia-Che Tsai, Anurag Khandelwal, Qifan Pu, Vaishaal Shankar, Joao Carreira, Karl Krauth, Neeraja Yadwadkar, Joseph E. Gonzalez, Raluca Ada Popa, Ion Stoica, David A. Patterson

Serverless cloud computing handles virtually all the system administration operations needed to make it easier for programmers to use the cloud.

Operating Systems

Do Image Classifiers Generalize Across Time?

1 code implementation • ICCV 2021 • Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, Ludwig Schmidt

Additionally, we evaluate three detection models and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points.

General Classification • Video Object Detection

A Meta-Analysis of Overfitting in Machine Learning

no code implementations • NeurIPS 2019 • Rebecca Roelofs, Vaishaal Shankar, Benjamin Recht, Sara Fridovich-Keil, Moritz Hardt, John Miller, Ludwig Schmidt

By systematically comparing the public ranking with the final ranking, we assess how much participants adapted to the holdout set over the course of a competition.

BIG-bench Machine Learning • Holdout Set

Serverless Straggler Mitigation using Local Error-Correcting Codes

1 code implementation • 21 Jan 2020 • Vipul Gupta, Dominic Carrano, Yaoqing Yang, Vaishaal Shankar, Thomas Courtade, Kannan Ramchandran

Inexpensive cloud services, such as serverless computing, are often vulnerable to straggling nodes that increase end-to-end latency for distributed computation.

Distributed, Parallel, and Cluster Computing • Information Theory

A Generalizable and Accessible Approach to Machine Learning with Global Satellite Imagery

no code implementations • 16 Oct 2020 • Esther Rolf, Jonathan Proctor, Tamma Carleton, Ian Bolliger, Vaishaal Shankar, Miyabi Ishihara, Benjamin Recht, Solomon Hsiang

Combining satellite imagery with machine learning (SIML) has the potential to address global challenges by remotely estimating socioeconomic and environmental conditions in data-poor regions, yet the resource requirements of SIML limit its accessibility and use.

BIG-bench Machine Learning • regression +1

Predicting with Confidence on Unseen Distributions

no code implementations • ICCV 2021 • Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, Ludwig Schmidt

Our work connects techniques from domain adaptation and predictive uncertainty literature, and allows us to predict model accuracy on challenging unseen distributions without access to labeled data.

Domain Adaptation

Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)

2 code implementations • 3 May 2022 • Alex Fang, Gabriel Ilharco, Mitchell Wortsman, Yuhao Wan, Vaishaal Shankar, Achal Dave, Ludwig Schmidt

Contrastively trained language-image models such as CLIP, ALIGN, and BASIC have demonstrated unprecedented robustness to multiple challenging natural distribution shifts.

Ranked #94 on Image Classification on ObjectNet (using extra training data)

Image Classification

Masked Autoencoding Does Not Help Natural Language Supervision at Scale

no code implementations • CVPR 2023 • Floris Weers, Vaishaal Shankar, Angelos Katharopoulos, Yinfei Yang, Tom Gunter

Self supervision and natural language supervision have emerged as two exciting ways to train general purpose image encoders which excel at a variety of downstream tasks.

On Robustness in Multimodal Learning

no code implementations • 10 Apr 2023 • Brandon McKinzie, Joseph Cheng, Vaishaal Shankar, Yinfei Yang, Jonathon Shlens, Alexander Toshev

Multimodal learning is defined as learning over multiple heterogeneous input modalities such as video, audio, and text.

Representation Learning

Data Filtering Networks

2 code implementations • 29 Sep 2023 • Alex Fang, Albin Madappally Jose, Amit Jain, Ludwig Schmidt, Alexander Toshev, Vaishaal Shankar

Our key finding is that the quality of a network for filtering is distinct from its performance on downstream tasks: for instance, a model that performs well on ImageNet can yield worse training sets than a model with low ImageNet accuracy that is trained on a small amount of high-quality data.

Language Modelling

Robust multimodal models have outlier features and encode more concepts

no code implementations • 19 Oct 2023 • Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas

In this work, we bridge this gap by probing the representation spaces of 12 robust multimodal models with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp).

TiC-CLIP: Continual Training of CLIP Models

1 code implementation • 24 Oct 2023 • Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri

We introduce the first set of web-scale Time-Continual (TiC) benchmarks for training vision-language models: TiC-DataComp, TiC-YFCC, and TiC-Redcaps.

Continual Learning • Retrieval

Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation

no code implementations • 27 Nov 2023 • Yuhui Zhang, Brandon McKinzie, Zhe Gan, Vaishaal Shankar, Alexander Toshev

Recent advances in image tokenizers, such as VQ-VAE, have enabled text-to-image generation using auto-regressive methods, similar to language modeling.

Language Modelling • Text-to-Image Generation

Scalable Pre-training of Large Autoregressive Image Models

2 code implementations • 16 Jan 2024 • Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista, Alexander Toshev, Vaishaal Shankar, Joshua M Susskind, Armand Joulin

Specifically, we highlight two key findings: (1) the performance of the visual features scales with both the model capacity and the quantity of data, and (2) the value of the objective function correlates with the performance of the model on downstream tasks.

Ranked #333 on Image Classification on ImageNet (using extra training data)

Image Classification
