no code implementations • ICML 2020 • Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry
Dataset replication is a useful tool for assessing whether models have overfit to a specific validation set or the exact circumstances under which it was generated.
no code implementations • 15 Jul 2022 • Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang, Tatsunori Hashimoto
The development of CLIP [Radford et al., 2021] has sparked a debate on whether language supervision can result in vision models with more transferable representations than traditional image-only methods.
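A minimal sketch of the zero-shot transfer setting this debate centers on: classify an image by cosine similarity between its embedding and text embeddings of class prompts. The encoders are stand-ins (random tensors below), not the CLIP API, and the dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

# Zero-shot classification with a CLIP-style model: an image is assigned the
# class whose text-prompt embedding is closest in the shared embedding space.
def zero_shot_classify(image_emb: torch.Tensor, class_prompt_emb: torch.Tensor) -> torch.Tensor:
    """image_emb: (N, D) image embeddings; class_prompt_emb: (C, D) embeddings of
    prompts like "a photo of a {class}". Returns predicted class indices."""
    image_emb = F.normalize(image_emb, dim=-1)
    class_prompt_emb = F.normalize(class_prompt_emb, dim=-1)
    logits = image_emb @ class_prompt_emb.T        # cosine similarities, shape (N, C)
    return logits.argmax(dim=-1)                   # nearest class prompt per image

# Toy usage with random embeddings standing in for real encoder outputs.
preds = zero_shot_classify(torch.randn(8, 512), torch.randn(10, 512))
print(preds.shape)  # torch.Size([8])
```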
1 code implementation • NeurIPS 2021 • Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry
We present a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules.
1 code implementation • 7 Jun 2021 • Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry
We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.
2 code implementations • 11 May 2021 • Eric Wong, Shibani Santurkar, Aleksander Mądry
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
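A minimal sketch of that recipe with a placeholder feature matrix: an L1-regularized linear probe fit on frozen deep features, where sparsity leaves only a handful of features per class to inspect. Hyperparameters and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: fit a sparse (L1-regularized) linear model on top of frozen deep
# features. Sparsity limits each class to a few features, making predictions
# easier to attribute and debug.
rng = np.random.default_rng(0)
deep_features = rng.normal(size=(500, 512))   # stand-in for penultimate-layer activations
labels = rng.integers(0, 10, size=500)        # stand-in for class labels

sparse_probe = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=2000)
sparse_probe.fit(deep_features, labels)

# Most coefficients are driven to zero; the survivors are the features to inspect.
nonzero_per_class = (sparse_probe.coef_ != 0).sum(axis=1)
print(nonzero_per_class)
```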
2 code implementations • ICLR 2021 • Shibani Santurkar, Dimitris Tsipras, Aleksander Madry
We develop a methodology for assessing the robustness of models to subpopulation shift: specifically, their ability to generalize to novel data subpopulations that were not observed during training.
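A toy illustration of the evaluation setup: each superclass is composed of several subpopulations, the model trains on some of them, and robustness is measured on the held-out ones. The class names and split below are synthetic, not the paper's benchmark.

```python
import numpy as np

# Sketch of a subpopulation-shift split: each broad class ("superclass") contains
# several subpopulations; the model trains on some and is evaluated on ones it
# never saw. Data here is purely illustrative.
rng = np.random.default_rng(0)
superclasses = {"dog": ["beagle", "husky", "poodle"], "cat": ["tabby", "siamese", "persian"]}

source, target = {}, {}
for sup, subpops in superclasses.items():
    held_out = str(rng.choice(subpops))           # subpopulation unseen during training
    source[sup] = [s for s in subpops if s != held_out]
    target[sup] = [held_out]

print("train on:", source)
print("evaluate on:", target)
# A model labels both splits at the superclass level; the accuracy gap between
# them measures robustness to the shift in underlying subpopulations.
```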
1 code implementation • 25 May 2020 • Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO).
1 code implementation • ICML 2020 • Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Andrew Ilyas, Aleksander Madry
Building rich machine learning datasets in a scalable manner often necessitates a crowd-sourced data collection pipeline.
1 code implementation • 19 May 2020 • Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry
We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for a standard human-in-the-loop measure of data quality.
2 code implementations • ICLR 2020 • Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms, Proximal Policy Optimization and Trust Region Policy Optimization.
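For reference, the clipped surrogate objective at the heart of PPO, written as a short function. This is the textbook form of the objective, not code from the papers; inputs below are random placeholders.

```python
import torch

def ppo_clipped_objective(log_probs_new: torch.Tensor,
                          log_probs_old: torch.Tensor,
                          advantages: torch.Tensor,
                          clip_eps: float = 0.2) -> torch.Tensor:
    """Textbook PPO surrogate: E[min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)],
    where r_t is the probability ratio between the new and old policies."""
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return torch.min(unclipped, clipped).mean()   # maximize this (or minimize its negation)

# Toy check with random data.
obj = ppo_clipped_objective(torch.randn(64), torch.randn(64), torch.randn(64))
print(obj.item())
```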
1 code implementation • NeurIPS 2019 • Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry
We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis.
Ranked #53 on Image Generation on CIFAR-10 (Inception score metric)
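A minimal sketch of the idea behind classifier-driven synthesis: perform gradient ascent on a classifier's score for a target class, starting from a seed input. The classifier below is a random stand-in (the approach relies on a robustly trained model), and the step size and iteration count are illustrative.

```python
import torch
import torch.nn as nn

# Class-conditional "synthesis" by gradient ascent on a classifier's logit for
# the target class, starting from noise. A randomly initialized classifier
# stands in for the robust model the approach depends on.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
target_class = 3

x = torch.randn(1, 3, 32, 32, requires_grad=True)   # seed image
for _ in range(100):
    logit = classifier(x)[0, target_class]
    grad, = torch.autograd.grad(logit, x)
    with torch.no_grad():
        x += 0.1 * grad / (grad.norm() + 1e-8)       # normalized ascent step
        x.clamp_(-1, 1)                              # keep pixels in a valid range

print(classifier(x)[0, target_class].item())         # target logit after ascent
```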
5 code implementations • 3 Jun 2019 • Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry
In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.
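The robust-optimization primitive being re-interpreted here is adversarial (min-max) training. Below is a compact sketch of one PGD training step; the model, data, and hyperparameters are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One step of robust optimization (PGD adversarial training): find a worst-case
# perturbation inside an L-infinity ball, then update the model on the perturbed input.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
eps, alpha, steps = 8 / 255, 2 / 255, 7

# Inner maximization: projected gradient ascent on the loss w.r.t. the input.
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(steps):
    loss = F.cross_entropy(model(x + delta), y)
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta += alpha * grad.sign()
        delta.clamp_(-eps, eps)                      # project back into the eps-ball

# Outer minimization: a standard gradient step on the adversarial example.
opt.zero_grad()
F.cross_entropy(model(x + delta.detach()), y).backward()
opt.step()
```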
3 code implementations • NeurIPS 2019 • Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear.
1 code implementation • ICLR 2020 • Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry
We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development.
7 code implementations • ICLR 2019 • Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.
11 code implementations • NeurIPS 2018 • Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry
Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs).
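For reference, the normalization BatchNorm applies during training, written out by hand and checked against torch.nn.BatchNorm1d at its default scale and shift.

```python
import torch

# BatchNorm in one step: normalize each feature over the batch, then apply a
# learnable scale (gamma) and shift (beta). Compared against PyTorch's module
# with default parameters (gamma = 1, beta = 0).
x = torch.randn(64, 32)                               # (batch, features)
mean, var = x.mean(dim=0), x.var(dim=0, unbiased=False)
manual = (x - mean) / torch.sqrt(var + 1e-5)          # gamma = 1, beta = 0

bn = torch.nn.BatchNorm1d(32, eps=1e-5)
bn.train()                                            # training mode uses batch statistics
print(torch.allclose(manual, bn(x), atol=1e-5))       # True
```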
no code implementations • NeurIPS 2018 • Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry
We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
no code implementations • ICLR 2018 • Shibani Santurkar, Ludwig Schmidt, Aleksander Madry
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on.
no code implementations • ICML 2018 • Shibani Santurkar, Ludwig Schmidt, Aleksander Mądry
A basic, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether they are truly able to capture all the fundamental characteristics of the distributions they are trained on.
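One way to probe this question in code is a classification-based diversity check: label generated samples with a pretrained classifier and compare the resulting class histogram to the training distribution (a collapsed generator shows up as a heavily skewed histogram). The generator and classifier below are random stand-ins, not models from the paper.

```python
import torch
import torch.nn as nn

# Classification-based check on GAN coverage: classify generated samples and
# inspect the class histogram. Both networks here are untrained placeholders.
generator = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

with torch.no_grad():
    samples = generator(torch.randn(1000, 128)).view(-1, 3, 32, 32)
    labels = classifier(samples).argmax(dim=1)

histogram = torch.bincount(labels, minlength=10).float() / len(labels)
print(histogram)   # compare against the real data's class frequencies
```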
no code implementations • 4 Mar 2017 • Shibani Santurkar, David Budden, Nir Shavit
Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed.
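A toy sketch of the alternative explored here: a codec whose encoder/decoder pair is learned from the data it will compress rather than hand-crafted. The architecture, sizes, and training loop are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy learned codec: an autoencoder learns the encoder/decoder pair from data.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64))    # 1024-dim image -> 64-dim code
decoder = nn.Sequential(nn.Linear(64, 32 * 32), nn.Sigmoid())

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
images = torch.rand(256, 1, 32, 32)                              # stand-in dataset

for _ in range(10):                                              # a few reconstruction steps
    code = encoder(images)
    recon = decoder(code).view_as(images)
    loss = F.mse_loss(recon, images)
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())   # reconstruction error from a 16x smaller representation
```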
no code implementations • 23 Feb 2017 • Shibani Santurkar, David Budden, Alexander Matveev, Heather Berlin, Hayk Saribekyan, Yaron Meirovitch, Nir Shavit
Connectomics is an emerging field in neuroscience that aims to reconstruct the 3-dimensional morphology of neurons from electron microscopy (EM) images.
no code implementations • ICML 2017 • David Budden, Alexander Matveev, Shibani Santurkar, Shraman Ray Chaudhuri, Nir Shavit
Deep convolutional neural networks (ConvNets) with 3-dimensional kernels allow joint modeling of spatiotemporal features.
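A one-layer example of the building block in question: a 3-dimensional convolution slides its kernel over frames as well as height and width, so a single layer captures spatiotemporal structure. Shapes are illustrative.

```python
import torch
import torch.nn as nn

# A 3D convolution over a short video clip: (batch, channels, frames, H, W).
clip = torch.randn(2, 3, 16, 112, 112)
conv3d = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=(3, 3, 3), padding=1)
features = conv3d(clip)
print(features.shape)                         # torch.Size([2, 64, 16, 112, 112])
```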
no code implementations • 29 Oct 2014 • Shibani Santurkar, Bipin Rajendran
We demonstrate a spiking neural network for navigation motivated by the chemotaxis network of Caenorhabditis elegans.
no code implementations • 29 Oct 2014 • Shibani Santurkar, Bipin Rajendran
In order to harness the computational advantages that spiking neural networks promise over their non-spiking counterparts, we develop a network comprising seven spiking neurons with non-plastic synapses, which we show is extremely robust in tracking a range of concentrations.
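Below is a minimal leaky integrate-and-fire (LIF) neuron, the kind of unit such circuits are assembled from; the time constants, threshold, and input current are illustrative and not parameters of the seven-neuron chemotaxis network.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: membrane potential leaks toward rest,
# integrates the input current, and emits a spike (then resets) on crossing the
# threshold. All constants are illustrative.
dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0
steps = 1000
current = 1.2 * np.ones(steps)                # constant drive above threshold

v, spikes = 0.0, []
for t in range(steps):
    v += dt / tau * (-v + current[t])         # leaky integration of input current
    if v >= v_thresh:                         # spike and reset on threshold crossing
        spikes.append(t * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {steps * dt:.1f} s")
```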