Search Results for author: Shibani Santurkar

Found 22 papers, 12 papers with code

Statistical Bias in Dataset Replication

no code implementations ICML 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

Dataset replication is a useful tool for assessing whether models have overfit to a specific validation set or the exact circumstances under which it was generated.

3DB: A Framework for Debugging Computer Vision Models

1 code implementation 7 Jun 2021 Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry

We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.

Leveraging Sparse Linear Layers for Debuggable Deep Networks

2 code implementations 11 May 2021 Eric Wong, Shibani Santurkar, Aleksander Mądry

We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
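A minimal sketch of the general idea, assuming features exported from a pretrained network and scikit-learn's L1-penalized logistic regression; the random arrays stand in for real features and labels, and this is not the authors' released implementation.

```python
# Sketch: fit a sparse (L1-regularized) linear classifier on top of frozen
# deep features. The feature matrix and labels below are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for penultimate-layer features from a pretrained network.
features = rng.normal(size=(1000, 512))   # (num_examples, feature_dim)
labels = rng.integers(0, 10, size=1000)   # toy 10-class labels

# The L1 penalty drives most weights to exactly zero, so each class depends
# on a small, inspectable set of features.
sparse_head = LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=2000)
sparse_head.fit(features, labels)

nonzero_per_class = (sparse_head.coef_ != 0).sum(axis=1)
print("non-zero weights per class:", nonzero_per_class)
```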

BREEDS: Benchmarks for Subpopulation Shift

2 code implementations ICLR 2021 Shibani Santurkar, Dimitris Tsipras, Aleksander Madry

We develop a methodology for assessing the robustness of models to subpopulation shift: specifically, their ability to generalize to novel data subpopulations that were not observed during training.
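A toy sketch of the kind of split such a benchmark induces: each superclass is represented by different subpopulations at training and evaluation time. The class hierarchy below is invented for illustration and is not the BREEDS ImageNet hierarchy.

```python
# Sketch of a subpopulation-shift split: train and test sets share the same
# superclasses but are populated by disjoint subclasses.
import random

hierarchy = {
    "dog":   ["beagle", "husky", "corgi", "poodle"],
    "cat":   ["siamese", "persian", "sphynx", "tabby"],
    "truck": ["pickup", "firetruck", "tow_truck", "dump_truck"],
}

random.seed(0)
source_split, target_split = {}, {}
for superclass, subclasses in hierarchy.items():
    shuffled = random.sample(subclasses, k=len(subclasses))
    half = len(shuffled) // 2
    source_split[superclass] = shuffled[:half]   # subpopulations seen in training
    target_split[superclass] = shuffled[half:]   # held out for evaluation only

print("train on:", source_split)
print("evaluate on:", target_split)
```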

Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO

1 code implementation 25 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO).
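For context, a minimal sketch of PPO's clipped surrogate objective, the algorithmic core that the case study compares against code-level implementation details. The shapes, clipping epsilon, and toy data are illustrative, and this is not the authors' codebase; the paper's point is precisely that details beyond this objective account for much of the observed performance.

```python
# Sketch of the PPO clipped surrogate loss on a batch of rollout statistics.
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Negative clipped surrogate objective, averaged over the batch."""
    ratio = torch.exp(log_probs_new - log_probs_old)   # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage with random tensors standing in for rollout data.
log_probs_old = torch.randn(64)
log_probs_new = log_probs_old + 0.1 * torch.randn(64)
advantages = torch.randn(64)
print(ppo_clip_loss(log_probs_new, log_probs_old, advantages))
```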

Identifying Statistical Bias in Dataset Replication

1 code implementation 19 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for a standard human-in-the-loop measure of data quality.
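A small, self-contained simulation of the bias mechanism at issue: selecting images by a noisy estimate of a statistic (here, annotator selection frequency computed from a handful of votes) yields a set whose true statistic differs from the observed one. The distributions, vote count, and threshold below are illustrative and are not the paper's numbers.

```python
# Sketch: thresholding on a noisy estimate biases the selected set.
import numpy as np

rng = np.random.default_rng(0)

true_freq = rng.beta(8, 2, size=10_000)          # true per-image selection frequency
votes = rng.binomial(n=5, p=true_freq) / 5       # noisy estimate from only 5 annotators

selected = votes >= 0.8                          # replicate by thresholding the estimate
print("observed frequency of selected images:", votes[selected].mean().round(3))
print("true frequency of selected images:    ", true_freq[selected].mean().round(3))
```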

Implementation Matters in Deep RL: A Case Study on PPO and TRPO

2 code implementations ICLR 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms, Proximal Policy Optimization and Trust Region Policy Optimization.

Image Synthesis with a Single (Robust) Classifier

2 code implementations NeurIPS 2019 Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry

We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis.

Ranked #41 on Image Generation on CIFAR-10 (Inception score metric)

Image Generation
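A bare-bones sketch of the underlying mechanism, class-conditional synthesis by gradient ascent on a classifier's logit. The untrained torchvision ResNet here is only a stand-in; the paper's results depend on an adversarially robust classifier and more careful optimization, neither of which this snippet provides.

```python
# Sketch: synthesize an input that maximizes a chosen class logit.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # stand-in classifier
target_class = 207                                        # arbitrary class index

x = torch.randn(1, 3, 224, 224, requires_grad=True)       # start from noise
optimizer = torch.optim.SGD([x], lr=0.5)

for _ in range(50):
    optimizer.zero_grad()
    logits = model(x)
    loss = -logits[0, target_class]   # ascend the target-class logit
    loss.backward()
    optimizer.step()

synthesized = x.detach().clamp(0, 1)  # crude clip to a valid image range
```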

Adversarial Robustness as a Prior for Learned Representations

3 code implementations 3 Jun 2019 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry

In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.
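A minimal sketch of the robust-optimization loop being referred to, i.e., PGD adversarial training with an L-infinity budget: an inner maximization over perturbations followed by a standard update on the adversarial example. The model, optimizer, and hyperparameters are placeholders rather than the authors' setup.

```python
# Sketch: min-max (adversarial) training step with an L-infinity PGD attack.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=7):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + step * grad.sign()           # maximize the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project back into the ball
        x_adv = x_adv.clamp(0, 1)                    # stay in valid pixel range
    return x_adv.detach()

def robust_training_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)            # inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)    # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```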

Adversarial Examples Are Not Bugs, They Are Features

3 code implementations NeurIPS 2019 Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry

Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear.

A Closer Look at Deep Policy Gradients

no code implementations ICLR 2020 Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development.

Value prediction

Robustness May Be at Odds with Accuracy

7 code implementations ICLR 2019 Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry

We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.

How Does Batch Normalization Help Optimization?

8 code implementations NeurIPS 2018 Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry

Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs).
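For reference, a minimal NumPy sketch of the training-time BatchNorm transform itself; the paper's contribution concerns why this operation helps optimization, which the snippet does not attempt to demonstrate.

```python
# Sketch of the BatchNorm forward pass: normalize each feature over the batch,
# then apply a learnable scale (gamma) and shift (beta).
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: (batch, features); gamma, beta: per-feature scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 3.0 + 5.0
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1
```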

Adversarially Robust Generalization Requires More Data

no code implementations NeurIPS 2018 Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry

We postulate that the difficulty of training robust classifiers stems, at least partially, from the inherently larger sample complexity of adversarially robust generalization.

General Classification • Image Classification

A Classification-Based Perspective on GAN Distributions

no code implementations ICLR 2018 Shibani Santurkar, Ludwig Schmidt, Aleksander Madry

A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on.

Classification • General Classification

A Classification-Based Study of Covariate Shift in GAN Distributions

no code implementations ICML 2018 Shibani Santurkar, Ludwig Schmidt, Aleksander Mądry

A basic, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether they are truly able to capture all the fundamental characteristics of the distributions they are trained on.

Classification • General Classification
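One way to read "classification-based" concretely: train a classifier on samples drawn from the generative model and test it on held-out real data, treating the accuracy gap as a proxy for missing structure. The sketch below uses toy Gaussians in place of a trained GAN and real images, so it only illustrates the mechanics of such a probe, not the paper's experiments.

```python
# Sketch of a classification-based probe of a generated distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def real_data(n):                 # stand-in for the true dataset
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 16))
    return x, y

def generated_data(n):            # stand-in for samples from a generative model
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=y[:, None] * 2.0, scale=0.2, size=(n, 16))  # reduced diversity
    return x, y

x_gen, y_gen = generated_data(2000)
x_real, y_real = real_data(2000)

clf = LogisticRegression(max_iter=1000).fit(x_gen, y_gen)
print("accuracy on real data:", clf.score(x_real, y_real))
```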

Generative Compression

no code implementations 4 Mar 2017 Shibani Santurkar, David Budden, Nir Shavit

Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed.

Video Compression

Toward Streaming Synapse Detection with Compositional ConvNets

no code implementations 23 Feb 2017 Shibani Santurkar, David Budden, Alexander Matveev, Heather Berlin, Hayk Saribekyan, Yaron Meirovitch, Nir Shavit

Connectomics is an emerging field in neuroscience that aims to reconstruct the 3-dimensional morphology of neurons from electron microscopy (EM) images.

Electron Microscopy

Deep Tensor Convolution on Multicores

no code implementations ICML 2017 David Budden, Alexander Matveev, Shibani Santurkar, Shraman Ray Chaudhuri, Nir Shavit

Deep convolutional neural networks (ConvNets) with 3-dimensional kernels allow joint modeling of spatiotemporal features.
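A one-layer PyTorch sketch of the 3-dimensional convolution in question, applied to a toy video volume; the shapes are illustrative, and the paper targets fast multicore CPU implementations of such kernels rather than this particular layer.

```python
# Sketch: a 3D convolution over a (frames, height, width) volume jointly
# captures spatial and temporal structure.
import torch
import torch.nn as nn

video = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, frames, H, W)
conv3d = nn.Conv3d(in_channels=3, out_channels=8, kernel_size=(3, 3, 3), padding=1)
features = conv3d(video)
print(features.shape)                      # torch.Size([1, 8, 16, 112, 112])
```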

Sub-threshold CMOS Spiking Neuron Circuit Design for Navigation Inspired by C. elegans Chemotaxis

no code implementations 29 Oct 2014 Shibani Santurkar, Bipin Rajendran

We demonstrate a spiking neural network for navigation motivated by the chemotaxis network of Caenorhabditis elegans.

A neural circuit for navigation inspired by C. elegans Chemotaxis

no code implementations 29 Oct 2014 Shibani Santurkar, Bipin Rajendran

In order to harness the computational advantages that spiking neural networks promise over their non-spiking counterparts, we develop a network of 7 spiking neurons with non-plastic synapses, which we show is extremely robust in tracking a range of concentrations.
