Search Results for author: Aleksander Madry

Found 36 papers, 26 papers with code

Statistical Bias in Dataset Replication

no code implementations ICML 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

Dataset replication is a useful tool for assessing whether models have overfit to a specific validation set or the exact circumstances under which it was generated.

Missingness Bias in Model Debugging

1 code implementation ICLR 2022 Saachi Jain, Hadi Salman, Eric Wong, Pengchuan Zhang, Vibhav Vineet, Sai Vemprala, Aleksander Madry

Missingness, or the absence of features from an input, is a concept fundamental to many model debugging tools.

Datamodels: Predicting Predictions from Training Data

1 code implementation 1 Feb 2022 Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry

We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data.
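As a rough illustration of the datamodeling idea (a toy reconstruction on synthetic data, not the paper's implementation): a datamodel is a simple, here linear, surrogate that predicts a model's output on a fixed target example from a 0/1 indicator of which training examples were included. The per-example effects and regularization below are made up for illustration.

# Minimal sketch of a datamodel: fit a linear surrogate that predicts a model's
# output on a fixed target example from a 0/1 mask over the training set.
# All data here is synthetic; the real approach trains many models on random subsets.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_subsets = 200, 1000

# Binary masks: row i marks which training examples were used for the i-th model.
masks = (rng.random((n_subsets, n_train)) < 0.5).astype(float)

# Stand-in for the measured output (e.g. correct-class margin) of each trained model.
true_influence = rng.normal(scale=0.1, size=n_train)   # hypothetical per-example effect
outputs = masks @ true_influence + rng.normal(scale=0.05, size=n_subsets)

# Fit the datamodel weights by regularized least squares (the paper uses sparse regression).
lam = 1.0
A = masks.T @ masks + lam * np.eye(n_train)
weights = np.linalg.solve(A, masks.T @ outputs)

# Large-magnitude weights flag training examples that most affect the target prediction.
print("most influential training indices:", np.argsort(-np.abs(weights))[:5])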

On Distinctive Properties of Universal Perturbations

no code implementations 31 Dec 2021 Sung Min Park, Kuo-An Wei, Kai Xiao, Jerry Li, Aleksander Madry

We identify properties of universal adversarial perturbations (UAPs) that distinguish them from standard adversarial perturbations.

Editing a classifier by rewriting its prediction rules

1 code implementation NeurIPS 2021 Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry

We present a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules.

Combining Diverse Feature Priors

1 code implementation 15 Oct 2021 Saachi Jain, Dimitris Tsipras, Aleksander Madry

To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly.

3DB: A Framework for Debugging Computer Vision Models

1 code implementation 7 Jun 2021 Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry

We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.

Non-robust Features through the Lens of Universal Perturbations

no code implementations 1 Jan 2021 Sung Min Park, Kuo-An Wei, Kai Yuanqing Xiao, Jerry Li, Aleksander Madry

We study universal adversarial perturbations and demonstrate that the prevailing picture of non-robust features is more nuanced.

Unadversarial Examples: Designing Objects for Robust Vision

2 code implementations NeurIPS 2021 Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, Ashish Kapoor

We study a class of realistic computer vision settings wherein one can influence the design of the objects being recognized.
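A hedged sketch of the core idea, as I read it: rather than perturbing an input to fool a classifier, optimize a perturbation on the object itself so the classifier becomes more confident in the correct label. The model, image, and class index below are placeholders, not the paper's setup.

# Minimal sketch of an "unadversarial" perturbation: gradient steps that *decrease* the
# loss of the true label, the mirror image of a standard adversarial attack.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()        # untrained placeholder classifier
image = torch.rand(1, 3, 224, 224)           # stand-in for a rendering of the object
label = torch.tensor([207])                  # hypothetical true class index

delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.SGD([delta], lr=0.05)

for _ in range(20):
    loss = F.cross_entropy(model(torch.clamp(image + delta, 0, 1)), label)
    opt.zero_grad()
    loss.backward()
    opt.step()                               # descend the loss: make the object easier to recognize

unadversarial = torch.clamp(image + delta, 0, 1)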

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

no code implementations 18 Dec 2020 Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein

As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance.

Data Poisoning

BREEDS: Benchmarks for Subpopulation Shift

2 code implementations ICLR 2021 Shibani Santurkar, Dimitris Tsipras, Aleksander Madry

We develop a methodology for assessing the robustness of models to subpopulation shift: specifically, their ability to generalize to novel data subpopulations that were not observed during training.
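Roughly, the benchmark construction groups fine-grained classes into superclasses and places disjoint subsets of each superclass's subclasses in the source (training) and target (shifted test) splits. The toy hierarchy below is invented purely to illustrate the split; BREEDS derives its hierarchy from the ImageNet class structure.

# Toy sketch of a subpopulation-shift split: each superclass's subclasses are divided
# between a source split (seen in training) and a target split (only seen at test time).
import random

hierarchy = {
    "dog": ["beagle", "husky", "poodle", "terrier"],
    "cat": ["siamese", "persian", "tabby", "sphynx"],
}

random.seed(0)
source, target = {}, {}
for superclass, subclasses in hierarchy.items():
    shuffled = random.sample(subclasses, len(subclasses))
    half = len(shuffled) // 2
    source[superclass] = shuffled[:half]    # subpopulations used for training
    target[superclass] = shuffled[half:]    # unseen subpopulations used to measure shift

print(source)
print(target)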

Do Adversarially Robust ImageNet Models Transfer Better?

3 code implementations NeurIPS 2020 Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry

Typically, better pre-trained models yield better transfer results, suggesting that initial accuracy is a key aspect of transfer learning performance.

Transfer Learning

Noise or Signal: The Role of Image Backgrounds in Object Recognition

1 code implementation ICLR 2021 Kai Xiao, Logan Engstrom, Andrew Ilyas, Aleksander Madry

We assess the tendency of state-of-the-art object recognition models to depend on signals from image backgrounds.

Object Recognition

Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO

1 code implementation 25 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO).

reinforcement-learning

Identifying Statistical Bias in Dataset Replication

1 code implementation 19 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for a standard human-in-the-loop measure of data quality.

Implementation Matters in Deep RL: A Case Study on PPO and TRPO

2 code implementations ICLR 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms, Proximal Policy Optimization and Trust Region Policy Optimization.

reinforcement-learning

The Two Regimes of Deep Network Training

no code implementations 24 Feb 2020 Guillaume Leclerc, Aleksander Madry

The learning rate schedule has a major impact on the performance of deep learning models.

On Adaptive Attacks to Adversarial Example Defenses

3 code implementations NeurIPS 2020 Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry

Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples.

Label-Consistent Backdoor Attacks

1 code implementation 5 Dec 2019 Alexander Turner, Dimitris Tsipras, Aleksander Madry

While such attacks are very effective, they crucially rely on the adversary injecting arbitrary inputs that are, often blatantly, mislabeled.

Image Synthesis with a Single (Robust) Classifier

1 code implementation NeurIPS 2019 Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry

We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis.

Ranked #53 on Image Generation on CIFAR-10 (Inception score metric)

Adversarial Robustness, Image Generation
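The approach, roughly, performs gradient steps in input space that increase a target class's score under an adversarially robust classifier. A minimal sketch follows; the untrained resnet18 is only a placeholder so the snippet runs, and a robust model would be substituted in practice.

# Rough sketch of classifier-driven synthesis: start from noise and take gradient steps
# that increase a target class score under the classifier.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()        # placeholder; use an adversarially robust model in practice
target = torch.tensor([281])                 # hypothetical target class
x = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(50):
    loss = F.cross_entropy(model(x), target)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x -= 2.0 * grad / (grad.norm() + 1e-12)   # normalized step toward the target class
        x.clamp_(0, 1)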

Adversarial Robustness as a Prior for Learned Representations

5 code implementations 3 Jun 2019 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry

In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.

Adversarial Robustness

Adversarial Examples Are Not Bugs, They Are Features

3 code implementations NeurIPS 2019 Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry

Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear.

Clean-Label Backdoor Attacks

no code implementations ICLR 2019 Alexander Turner, Dimitris Tsipras, Aleksander Madry

Deep neural networks have recently been shown to be vulnerable to backdoor attacks.

A Closer Look at Deep Policy Gradients

1 code implementation ICLR 2020 Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development.

Value prediction

Spectral Signatures in Backdoor Attacks

no code implementations NeurIPS 2018 Brandon Tran, Jerry Li, Aleksander Madry

In this paper, we identify a new property of all known backdoor attacks, which we call spectral signatures.

Data Poisoning
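In rough terms, the defense scores each training example by how strongly its centered feature representation projects onto the top singular direction, and flags the highest-scoring examples as potentially poisoned. The sketch below uses synthetic features; in practice the features would come from a trained network's penultimate layer.

# Sketch of spectral-signature scoring: examples whose (centered) features align strongly
# with the top singular direction are flagged as suspect.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))
features[:25] += 4.0 * rng.normal(size=64)      # synthetic "poisoned" cluster for illustration

centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = (centered @ vt[0]) ** 2                # squared projection onto the top singular vector

suspect = np.argsort(-scores)[: int(0.1 * len(scores))]
print("flagged indices (should mostly be < 25):", np.sort(suspect)[:10])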

Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors

3 code implementations ICLR 2019 Andrew Ilyas, Logan Engstrom, Aleksander Madry

We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available.
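With only loss-oracle access, the gradient must be estimated from queries. The sketch below shows the antithetic finite-difference estimator that such attacks build on, applied to a toy quadratic oracle; the paper's contribution is to sharpen this kind of estimator with time- and data-dependent priors via a bandit formulation, which is not reproduced here.

# Sketch of a query-only gradient estimate (antithetic finite differences) on a toy objective.
# `loss_oracle` stands in for the only access a black-box attacker has.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)                      # hidden parameters, just to make the oracle concrete

def loss_oracle(x):
    return float(np.sum((x - w) ** 2))        # can be queried, but its gradient cannot

def estimate_gradient(x, n_queries=50, sigma=0.01):
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)
        g += (loss_oracle(x + sigma * u) - loss_oracle(x - sigma * u)) / (2 * sigma) * u
    return g / n_queries

x = np.zeros(100)
for _ in range(100):
    x -= 0.05 * estimate_gradient(x)          # step along the estimated gradient
print("oracle loss after query-based optimization:", loss_oracle(x))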

Robustness May Be at Odds with Accuracy

7 code implementations ICLR 2019 Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry

We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.

Adversarial Robustness

How Does Batch Normalization Help Optimization?

9 code implementations NeurIPS 2018 Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry

Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs).
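For reference, a minimal sketch of what a BatchNorm layer computes at training time: normalize each feature over the batch, then apply a learned scale and shift. The check against torch.nn.BatchNorm1d uses its default (ones/zeros) affine parameters.

# Minimal per-feature batch normalization (training-time behavior) and a check
# against torch.nn.BatchNorm1d, to make the computation concrete.
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (N, C) activations; normalize each feature over the batch, then scale and shift.
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    return gamma * (x - mean) / torch.sqrt(var + eps) + beta

x = torch.randn(32, 8)
gamma, beta = torch.ones(8), torch.zeros(8)
ours = batch_norm(x, gamma, beta)
ref = torch.nn.BatchNorm1d(8)(x)              # fresh module: training mode, affine params ones/zeros
print(torch.allclose(ours, ref, atol=1e-5))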

On the limitations of first order approximation in GAN dynamics

no code implementations ICLR 2018 Jerry Li, Aleksander Madry, John Peebles, Ludwig Schmidt

This suggests that such use of a first-order approximation of the discriminator, a de facto standard in existing GAN training dynamics, may be one of the factors that makes GAN training so challenging in practice.

A Classification-Based Perspective on GAN Distributions

no code implementations ICLR 2018 Shibani Santurkar, Ludwig Schmidt, Aleksander Madry

A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on.

Classification, General Classification

On the Limitations of First-Order Approximation in GAN Dynamics

no code implementations ICML 2018 Jerry Li, Aleksander Madry, John Peebles, Ludwig Schmidt

While Generative Adversarial Networks (GANs) have demonstrated promising performance on multiple vision tasks, their learning dynamics are not yet well understood, both in theory and in practice.

Towards Deep Learning Models Resistant to Adversarial Attacks

45 code implementations ICLR 2018 Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu

The principled nature of the underlying robust optimization view also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.

Adversarial Attack, Adversarial Defense, +6
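The training and attack method referred to here centers on projected gradient descent (PGD) within an l-infinity ball around the input. A minimal sketch, with a placeholder linear model rather than the paper's networks:

# Sketch of an l-infinity PGD attack, the inner maximization used both to evaluate and to
# adversarially train models in this line of work. Model and data are placeholders.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start in the ball
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                              # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project back
    return x_adv

x_adv = pgd_attack(model, x, y)

Adversarial training then alternates this inner maximization with parameter updates that minimize the loss on the resulting adversarial examples.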
