Search Results for author: Logan Engstrom

Found 25 papers, 23 papers with code

Statistical Bias in Dataset Replication

no code implementations ICML 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

Dataset replication is a useful tool for assessing whether models have overfit to a specific validation set or the exact circumstances under which it was generated.

DsDm: Model-Aware Dataset Selection with Datamodels

1 code implementation 23 Jan 2024 Logan Engstrom, Axel Feldmann, Aleksander Madry

When selecting data for training large-scale models, standard practice is to filter for examples that match human notions of data quality.

Language Modelling
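
For contrast with quality filtering, a hypothetical sketch of the model-aware idea (not the paper's exact procedure): rank each candidate example by an estimated effect on target-task loss, e.g. from a fitted datamodel, and keep the most helpful ones. The estimator feeding `est_target_gain` is an assumption here.

    import numpy as np

    def select_subset(est_target_gain: np.ndarray, k: int) -> np.ndarray:
        # est_target_gain[i]: estimated target-loss reduction from including
        # candidate example i (assumed to come from a datamodel-style estimator).
        return np.argsort(-est_target_gain)[:k]

    # Illustrative usage with synthetic scores:
    scores = np.random.default_rng(0).normal(size=10_000)
    chosen = select_subset(scores, k=2_500)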

FFCV: Accelerating Training by Removing Data Bottlenecks

2 code implementations CVPR 2023 Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry

For example, we are able to train an ImageNet ResNet-50 model to 75% accuracy in only 20 minutes on a single machine.
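
For context, a minimal sketch of the FFCV workflow following the library's quickstart (class names are from FFCV's documented API and may shift between versions; `my_dataset` is an assumed indexed (image, label) dataset): convert the data to FFCV's binary format once, then stream it through a compiled loading pipeline.

    from ffcv.writer import DatasetWriter
    from ffcv.fields import RGBImageField, IntField
    from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
    from ffcv.loader import Loader, OrderOption
    from ffcv.transforms import ToTensor

    # One-time conversion: serialize an indexed (image, label) dataset to .beton.
    writer = DatasetWriter('train.beton', {
        'image': RGBImageField(max_resolution=256),
        'label': IntField(),
    })
    writer.from_indexed_dataset(my_dataset)  # my_dataset: assumed, e.g. a torchvision dataset

    # Training-time loading: decoding and transforms run in a compiled pipeline,
    # which is where the data-bottleneck savings come from.
    loader = Loader(
        'train.beton',
        batch_size=512,
        num_workers=8,
        order=OrderOption.RANDOM,
        pipelines={
            'image': [SimpleRGBImageDecoder(), ToTensor()],
            'label': [IntDecoder(), ToTensor()],
        },
    )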

Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation

1 code implementation 15 Feb 2023 Joshua Vendrow, Saachi Jain, Logan Engstrom, Aleksander Madry

In this work, we introduce the notion of a dataset interface: a framework that, given an input dataset and a user-specified shift, returns instances from that input distribution that exhibit the desired shift.

counterfactual

When does Bias Transfer in Transfer Learning?

1 code implementation 6 Jul 2022 Hadi Salman, Saachi Jain, Andrew Ilyas, Logan Engstrom, Eric Wong, Aleksander Madry

Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside.

Transfer Learning

Datamodels: Predicting Predictions from Training Data

1 code implementation 1 Feb 2022 Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry

We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data.
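
Roughly, a datamodel is a simple surrogate, typically a sparse linear model, fit to predict a fixed target example's output from the indicator vector of which training points were included. A runnable toy sketch, with plain least squares standing in for the paper's large-scale regularized regression and with synthetic outputs so the demo is self-contained:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 1_000, 200        # illustrative scale: m trained models, n training points
    masks = (rng.random((m, n)) < 0.5).astype(float)  # which points each model saw

    # In the real pipeline, outputs[j] is the j-th model's output (e.g., the
    # correct-class margin) on one fixed target example; synthesized here so
    # the demo runs end to end.
    true_theta = rng.normal(size=n) / np.sqrt(n)
    outputs = masks @ true_theta + 0.01 * rng.normal(size=m)

    # Fit a linear datamodel: outputs ~= masks @ theta + b. The paper fits
    # sparse (l1-regularized) regression at much larger scale; least squares
    # keeps the sketch short.
    X = np.hstack([masks, np.ones((m, 1))])
    theta_hat, *_ = np.linalg.lstsq(X, outputs, rcond=None)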

3DB: A Framework for Debugging Computer Vision Models

1 code implementation 7 Jun 2021 Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry

We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.

Unadversarial Examples: Designing Objects for Robust Vision

2 code implementations NeurIPS 2021 Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, Ashish Kapoor

We study a class of realistic computer vision settings wherein one can influence the design of the objects being recognized.

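The construction runs the adversarial-example machinery in reverse: optimize an object texture or patch by gradient descent on the true-class loss, so the object becomes easier to recognize. A hedged sketch, where apply_patch, model, and the data are assumptions:

    import torch
    import torch.nn.functional as F

    def unadversarial_step(model, patch, images, labels, apply_patch, lr=0.05):
        # One step of *helping* the classifier: descend, rather than ascend,
        # the cross-entropy of the true labels with respect to the patch.
        p = patch.clone().requires_grad_(True)
        loss = F.cross_entropy(model(apply_patch(images, p)), labels)
        (grad,) = torch.autograd.grad(loss, p)
        return (p - lr * grad.sign()).detach().clamp(0, 1)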

Do Adversarially Robust ImageNet Models Transfer Better?

2 code implementations NeurIPS 2020 Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry

Typically, better pre-trained models yield better transfer results, suggesting that initial accuracy is a key aspect of transfer learning performance.

Transfer Learning

Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO

2 code implementations 25 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO).

Reinforcement Learning (RL)
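
For reference, the clipped surrogate objective at the core of PPO; the paper examines how code-level details beyond this objective affect results. A standard PyTorch rendering, where logp_new, logp_old, and adv are assumed per-sample tensors:

    import torch

    def ppo_clip_loss(logp_new, logp_old, adv, eps=0.2):
        # Standard PPO clipped surrogate (returned as a loss to minimize).
        ratio = torch.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
        return -torch.min(ratio * adv, clipped * adv).mean()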

Identifying Statistical Bias in Dataset Replication

1 code implementation 19 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for a standard human-in-the-loop measure of data quality.

Implementation Matters in Deep RL: A Case Study on PPO and TRPO

2 code implementations ICLR 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms, Proximal Policy Optimization and Trust Region Policy Optimization.

Reinforcement Learning (RL)

Image Synthesis with a Single (Robust) Classifier

1 code implementation NeurIPS 2019 Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry

We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis.

Ranked #60 on Image Generation on CIFAR-10 (Inception score metric)

Adversarial Robustness

Image Generation
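
Mechanically, synthesis here is targeted gradient ascent on a robustly trained classifier's class score, starting from a seed input. A minimal sketch, where robust_model, the seed, and the step schedule are assumptions (the paper uses L2 PGD):

    import torch

    def synthesize(robust_model, seed, target_class, steps=60, lr=1.0):
        # Ascend the target-class logit of an adversarially robust classifier.
        x = seed.clone().requires_grad_(True)
        for _ in range(steps):
            logit = robust_model(x)[0, target_class]
            (grad,) = torch.autograd.grad(logit, x)
            x = (x + lr * grad / (grad.norm() + 1e-12)).clamp(0, 1)
            x = x.detach().requires_grad_(True)
        return x.detach()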

Adversarial Robustness as a Prior for Learned Representations

5 code implementations 3 Jun 2019 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry

In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.

Adversarial Robustness

Adversarial Examples Are Not Bugs, They Are Features

4 code implementations NeurIPS 2019 Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry

Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear.


A Closer Look at Deep Policy Gradients

no code implementations ICLR 2020 Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development.

Value prediction

Evaluating and Understanding the Robustness of Adversarial Logit Pairing

1 code implementation 26 Jul 2018 Logan Engstrom, Andrew Ilyas, Anish Athalye

We evaluate the robustness of Adversarial Logit Pairing, a recently proposed defense against adversarial examples.

Adversarial Attack

Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors

3 code implementations ICLR 2019 Andrew Ilyas, Logan Engstrom, Aleksander Madry

We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available.
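
A loose, simplified sketch of the bandit framing (not the paper's exact update rules): keep a running gradient "prior" v, refine it with two loss-oracle queries per round via a finite difference along a random exploration direction, and take attack steps with sign(v). All hyperparameters below are illustrative:

    import torch

    def bandit_step(loss_fn, x, v, delta=0.01, fd_eps=0.1, eta=100.0, lr=0.01):
        # Explore around the current prior v and estimate its correlation
        # with the true gradient from two loss-oracle queries.
        u = torch.randn_like(x) * delta
        d = (loss_fn(x + fd_eps * (v + u)) - loss_fn(x + fd_eps * (v - u))) / fd_eps
        v = v + eta * d * u            # refine the latent gradient estimate
        x_adv = x + lr * v.sign()      # l_inf-style attack step using the prior
        return x_adv, v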

Robustness May Be at Odds with Accuracy

7 code implementations ICLR 2019 Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry

We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.

Adversarial Robustness

Black-box Adversarial Attacks with Limited Queries and Information

2 code implementations ICML 2018 Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin

Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model.
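
The query-limited setting is handled with natural evolution strategies (NES): estimate the gradient from loss values at Gaussian-perturbed inputs, using antithetic sampling, and feed the estimate into PGD-style steps. A compact sketch of the estimator, with loss_fn and x assumed:

    import torch

    def nes_gradient(loss_fn, x, sigma=0.001, n_queries=100):
        # Antithetic NES estimate of grad_x loss_fn(x) using only loss values.
        g = torch.zeros_like(x)
        for _ in range(n_queries // 2):
            u = torch.randn_like(x)
            g += u * (loss_fn(x + sigma * u) - loss_fn(x - sigma * u))
        return g / (n_queries * sigma)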

Query-Efficient Black-box Adversarial Examples (superseded)

1 code implementation 19 Dec 2017 Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin

Second, we introduce a new algorithm to perform targeted adversarial attacks in the partial-information setting, where the attacker only has access to a limited number of target classes.

Adversarial Attack

Synthesizing Robust Adversarial Examples

3 code implementations 24 Jul 2017 Anish Athalye, Logan Engstrom, Andrew Ilyas, Kevin Kwok

We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations.
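
The underlying algorithm is Expectation Over Transformation (EOT): optimize the perturbation against the expected loss under a chosen distribution of transformations, approximated by sampling. A minimal targeted-attack sketch, where random_transform and model are assumptions:

    import torch
    import torch.nn.functional as F

    def eot_step(model, x_adv, target, random_transform, n_samples=10, alpha=0.01):
        # Average the loss over sampled transformations, then take a signed
        # step toward the target class so the attack survives the whole
        # distribution rather than one fixed view.
        x = x_adv.clone().requires_grad_(True)
        loss = 0.0
        for _ in range(n_samples):
            t = random_transform()  # e.g., random rotation, scaling, lighting
            loss = loss + F.cross_entropy(model(t(x)), target)
        (grad,) = torch.autograd.grad(loss / n_samples, x)
        return (x - alpha * grad.sign()).detach()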
