Dataset replication is a useful tool for assessing whether models have overfit to a specific validation set or to the exact circumstances under which it was generated.
The development of CLIP [Radford et al., 2021] has sparked a debate on whether language supervision can result in vision models with more transferable representations than traditional image-only methods.
We present a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules.
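As a minimal, hypothetical sketch of the flavor of such an edit (not the paper's exact procedure), a single linear layer can be rewritten in closed form so that a chosen key vector maps to a new target value:

```python
import numpy as np

def rewrite_rule(W, k, v_target):
    """Rank-one edit of a linear layer: after the update, W_new @ k == v_target.

    W:        (m, n) weight matrix of the layer being edited.
    k:        (n,) key vector (e.g., features of the concept to rewrite).
    v_target: (m,) desired output for that key.
    """
    v_old = W @ k
    # Rank-one correction along k; directions orthogonal to k are untouched.
    return W + np.outer(v_target - v_old, k) / (k @ k)

# Toy usage: redirect what a random layer produces for one key.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
k = rng.normal(size=8)
v_new = np.ones(4)
W_edited = rewrite_rule(W, k, v_new)
assert np.allclose(W_edited @ k, v_new)
```

Because the correction acts only along the key direction, the edit is targeted rather than a full retraining.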
We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.
We develop a methodology for assessing the robustness of models to subpopulation shift: specifically, their ability to generalize to novel data subpopulations that were not observed during training.
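A minimal sketch of how such an evaluation can be instantiated, assuming a hypothetical superclass-to-subpopulation mapping (the labels and split below are illustrative, not the paper's benchmark):

```python
import random

# Hypothetical superclass -> subpopulation mapping (illustrative labels).
superclasses = {
    "dog": ["terrier", "retriever", "husky", "poodle"],
    "cat": ["siamese", "tabby", "persian", "sphynx"],
}

def subpopulation_split(superclasses, held_out_frac=0.5, seed=0):
    """Split each class's subpopulations into 'seen' (train) and 'unseen' (test)."""
    rng = random.Random(seed)
    seen, unseen = {}, {}
    for cls, subs in superclasses.items():
        subs = subs[:]
        rng.shuffle(subs)
        cut = int(len(subs) * (1 - held_out_frac))
        seen[cls], unseen[cls] = subs[:cut], subs[cut:]
    return seen, unseen

seen, unseen = subpopulation_split(superclasses)
# Train a classifier on images drawn from `seen`, then measure how much
# accuracy drops on `unseen` subpopulations of the *same* superclasses.
print(seen, unseen)
```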
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO).
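For reference, PPO's clipped surrogate objective, one of the design choices such a case study examines, fits in a few lines; this sketch assumes precomputed probability ratios and advantage estimates:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from the PPO paper.

    ratio:     pi_new(a|s) / pi_old(a|s), per sample.
    advantage: estimated advantages, per sample.
    eps:       clipping parameter (0.2 is a commonly used default).
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    # Pessimistic (min) bound, averaged over the batch; this is maximized.
    return np.mean(np.minimum(unclipped, clipped))
```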
Building rich machine learning datasets in a scalable manner often necessitates a crowd-sourced data collection pipeline.
We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for a standard human-in-the-loop measure of data quality.
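Measuring such a drop amounts to evaluating the same model on both test sets; a minimal PyTorch sketch (the model and data loaders are placeholders to be supplied by the reader):

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` over `loader`."""
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total

# Hypothetical usage: `val_loader` over the original ImageNet validation set,
# `v2_loader` over ImageNet-v2; the gap is the quantity under study.
# gap = accuracy(model, val_loader) - accuracy(model, v2_loader)
```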
We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis.
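A minimal sketch of the underlying recipe, assuming an adversarially robust classifier is available (the randomly initialized model below is only a placeholder): perform gradient ascent on the input image to maximize the logit of a target class.

```python
import torch
from torchvision.models import resnet50

# Placeholder: the technique relies on an *adversarially robust* classifier;
# substitute robust weights for this randomly initialized model.
model = resnet50(weights=None).eval()

def synthesize(model, target_class, steps=60, lr=0.5):
    """Gradient ascent on the input image to maximize one class logit."""
    x = torch.rand(1, 3, 224, 224, requires_grad=True)  # random seed image
    for _ in range(steps):
        logit = model(x)[0, target_class]
        (grad,) = torch.autograd.grad(logit, x)
        with torch.no_grad():
            x += lr * grad / (grad.norm() + 1e-12)  # normalized ascent step
            x.clamp_(0, 1)                          # stay in valid pixel range
    return x.detach()

img = synthesize(model, target_class=207)  # e.g., an ImageNet class index
```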
In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.
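The workhorse of robust optimization in this setting is projected gradient descent (PGD) for the inner maximization; a minimal l_inf sketch with illustrative hyperparameters:

```python
import torch

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted l_inf PGD attack: the inner max of adversarial training."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()              # ascent on the loss
            delta.clamp_(-eps, eps)                   # project into eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels valid
    return (x + delta).detach()
```

Adversarial training then minimizes the model's loss on these worst-case inputs, which is what imposes the prior on the learned features.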
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear.
We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development.
We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.
We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
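Both statements concern the adversarially robust risk, which replaces the standard expected loss with its worst case over a perturbation set; in standard notation (the l_inf ball below is one common choice):

```latex
% Adversarially robust risk: worst-case loss over an l_inf ball of radius eps
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\, \max_{\|\delta\|_{\infty} \le \varepsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \Big]
```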
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on.
Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed.
Connectomics is an emerging field in neuroscience that aims to reconstruct the 3-dimensional morphology of neurons from electron microscopy (EM) images.
We demonstrate a spiking neural network for navigation motivated by the chemotaxis network of Caenorhabditis elegans.
To harness the computational advantages that spiking neural networks promise over their non-spiking counterparts, we develop a network of seven spiking neurons with non-plastic synapses, which we show is extremely robust in tracking a range of concentrations.
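As a toy illustration of the building block involved (a single leaky integrate-and-fire neuron, not the paper's seven-neuron circuit), a discrete-time simulation might look like:

```python
import numpy as np

def lif_spikes(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron driven by `inputs`.

    Membrane potential decays toward rest with time constant `tau` and
    emits a spike (then resets) whenever it crosses `v_thresh`.
    """
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)       # leaky integration of input current
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                # reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)

# A rising "concentration" signal produces a rising spike rate, the basic
# quantity a chemotaxis circuit can compare over time.
signal = np.linspace(0.0, 0.2, 200)
print(lif_spikes(signal).sum())
```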