Search Results for author: Dimitris Tsipras

Found 23 papers, 18 papers with code

Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO

2 code implementations 25 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO).

Reinforcement Learning (RL)
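
For readers unfamiliar with PPO, the sketch below shows the clipped surrogate objective it optimizes. This is a generic illustration of the algorithm, not code from the paper; the tensor names and toy batch are placeholder assumptions.

```python
# Minimal sketch of PPO's clipped surrogate loss (illustrative only).
# All tensors are assumed to be precomputed per-sample quantities.
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective, negated so it can be minimized."""
    ratio = torch.exp(log_probs_new - log_probs_old)              # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage with random data
adv = torch.randn(64)
lp_old = torch.randn(64)
lp_new = lp_old + 0.05 * torch.randn(64)
print(ppo_clip_loss(lp_new, lp_old, adv))
```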

Adversarial Examples Are Not Bugs, They Are Features

4 code implementations NeurIPS 2019 Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry

Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear.

BIG-bench Machine Learning

Adversarial Robustness as a Prior for Learned Representations

5 code implementations 3 Jun 2019 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry

In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.

Adversarial Robustness
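
As background, the robust optimization referred to here is typically realized as adversarial training with a projected gradient descent (PGD) inner loop. A minimal sketch under that assumption follows; `model`, the data, and all hyperparameters are placeholders rather than the paper's settings.

```python
# Sketch of adversarial (robust) training with an L-infinity PGD inner loop.
# Illustrative only; `model`, `optimizer`, and the batch come from elsewhere.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Find a worst-case perturbation of x within an L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project onto the ball
            x_adv = x_adv.clamp(0, 1)                         # stay a valid image
    return x_adv.detach()

def robust_training_step(model, optimizer, x, y):
    """One step of adversarial training: fit the model on the perturbed batch."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```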

BREEDS: Benchmarks for Subpopulation Shift

2 code implementations ICLR 2021 Shibani Santurkar, Dimitris Tsipras, Aleksander Madry

We develop a methodology for assessing the robustness of models to subpopulation shift: specifically, their ability to generalize to novel data subpopulations that were not observed during training.
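
To make the setup concrete, the toy sketch below builds a subpopulation-shift split: each superclass appears in training only through some of its subclasses and is evaluated on held-out ones. The hierarchy used is an invented placeholder, not the actual BREEDS class structure.

```python
# Toy subpopulation-shift split: each superclass is seen during training only
# through some of its subclasses and evaluated on the remaining, unseen ones.
# The hierarchy below is a made-up placeholder, not the BREEDS hierarchy.
import random

hierarchy = {
    "dog":    ["beagle", "husky", "poodle", "terrier"],
    "insect": ["ant", "bee", "beetle", "moth"],
}

def subpopulation_split(hierarchy, train_frac=0.5, seed=0):
    rng = random.Random(seed)
    train_subclasses, test_subclasses = {}, {}
    for superclass, subclasses in hierarchy.items():
        subs = subclasses[:]
        rng.shuffle(subs)
        k = int(len(subs) * train_frac)
        train_subclasses[superclass] = subs[:k]   # seen during training
        test_subclasses[superclass] = subs[k:]    # novel subpopulations at test time
    return train_subclasses, test_subclasses

train_split, test_split = subpopulation_split(hierarchy)
print(train_split)
print(test_split)
```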

Robustness May Be at Odds with Accuracy

7 code implementations ICLR 2019 Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry

We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.

Adversarial Robustness

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes

2 code implementations 1 Aug 2022 Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant

To make progress towards understanding in-context learning, we consider the well-defined problem of training a model to in-context learn a function class (e.g., linear functions): that is, given data derived from some functions in the class, can we train a model to in-context learn "most" functions from this class?

In-Context Learning
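
As an illustration of the setup, the sketch below constructs an in-context prompt for a randomly drawn linear function by interleaving inputs with their labels. The dimensions, prompt format, and padding scheme are assumptions for illustration, not the paper's exact construction.

```python
# Toy construction of an in-context learning prompt for the linear function class.
# Illustrative only: the dimension and number of in-context examples are arbitrary.
import torch

def sample_linear_prompt(d=8, n_examples=16):
    """Sample w ~ N(0, I), then build an (x_1, f(x_1), ..., x_n, f(x_n)) prompt."""
    w = torch.randn(d)
    xs = torch.randn(n_examples, d)
    ys = xs @ w                              # f(x) = <w, x>
    # Interleave inputs and (zero-padded) labels into one sequence of d-dim tokens.
    y_tokens = torch.zeros(n_examples, d)
    y_tokens[:, 0] = ys
    prompt = torch.stack([xs, y_tokens], dim=1).reshape(2 * n_examples, d)
    return prompt, w

prompt, w = sample_linear_prompt()
print(prompt.shape)   # (32, 8): alternating x and y tokens
```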

Image Synthesis with a Single (Robust) Classifier

1 code implementation NeurIPS 2019 Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry

We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis.

Ranked #60 on Image Generation on CIFAR-10 (Inception score metric)

Adversarial Robustness, Image Generation
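
A rough sketch of the kind of procedure this enables: starting from noise, take gradient steps on the input to maximize a (robust) classifier's logit for a target class. `robust_model`, the step size, and the iteration count below are assumptions, not the paper's settings.

```python
# Sketch: synthesize an image by maximizing a classifier's logit for a target
# class via gradient ascent on the input. Illustrative only; `robust_model` is
# assumed to be an adversarially trained image classifier defined elsewhere.
import torch

def synthesize(robust_model, target_class, shape=(1, 3, 224, 224),
               steps=200, lr=0.05):
    x = torch.rand(shape)                       # start from random noise
    for _ in range(steps):
        x.requires_grad_(True)
        logit = robust_model(x)[0, target_class]
        grad, = torch.autograd.grad(logit, x)
        with torch.no_grad():
            x = (x + lr * grad).clamp(0, 1)     # ascend the target-class logit
    return x.detach()
```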

Implementation Matters in Deep RL: A Case Study on PPO and TRPO

2 code implementations ICLR 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms, Proximal Policy Optimization and Trust Region Policy Optimization.

Reinforcement Learning (RL)

How Does Batch Normalization Help Optimization?

11 code implementations NeurIPS 2018 Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry

Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs).
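
For reference, a minimal example of how BatchNorm is typically inserted between a convolution and its nonlinearity (standard PyTorch usage, not code from the paper):

```python
# Minimal example of BatchNorm placed between a convolution and its activation.
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(16),   # normalizes each channel over the batch, then rescales
    nn.ReLU(),
)

x = torch.randn(8, 3, 32, 32)     # a batch of 8 RGB images
print(block(x).shape)             # torch.Size([8, 16, 32, 32])
```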

Editing a classifier by rewriting its prediction rules

1 code implementation NeurIPS 2021 Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry

We present a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules.

Label-Consistent Backdoor Attacks

1 code implementation 5 Dec 2019 Alexander Turner, Dimitris Tsipras, Aleksander Madry

While such attacks are very effective, they crucially rely on the adversary injecting arbitrary inputs that are (often blatantly) mislabeled.
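
For concreteness, the sketch below illustrates the standard backdoor poisoning this sentence refers to: stamp a small trigger onto a fraction of training images and relabel them as the attacker's target class. Shapes, the trigger pattern, and the poisoning rate are arbitrary placeholders, not the paper's setup.

```python
# Toy sketch of standard backdoor poisoning: add a small trigger patch to
# selected training images and flip their labels to the attacker's target class.
import torch

def poison(images, labels, target_class=0, poison_frac=0.05, seed=0):
    """Return a copy of the data with a fraction of samples backdoored."""
    images, labels = images.clone(), labels.clone()
    g = torch.Generator().manual_seed(seed)
    n_poison = int(len(images) * poison_frac)
    idx = torch.randperm(len(images), generator=g)[:n_poison]
    images[idx, :, -3:, -3:] = 1.0      # 3x3 white trigger in the bottom-right corner
    labels[idx] = target_class          # blatantly mislabel the poisoned samples
    return images, labels

x = torch.rand(100, 3, 32, 32)
y = torch.randint(0, 10, (100,))
x_poisoned, y_poisoned = poison(x, y)
```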

Identifying Statistical Bias in Dataset Replication

1 code implementation 19 May 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for a standard human-in-the-loop measure of data quality.

Combining Diverse Feature Priors

1 code implementation 15 Oct 2021 Saachi Jain, Dimitris Tsipras, Aleksander Madry

To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly.

Adversarially Robust Generalization Requires More Data

no code implementations NeurIPS 2018 Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry

We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.

General Classification, Image Classification

A Closer Look at Deep Policy Gradients

no code implementations ICLR 2020 Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development.

Value prediction

Clean-Label Backdoor Attacks

no code implementations ICLR 2019 Alexander Turner, Dimitris Tsipras, Aleksander Madry

Deep neural networks have been recently demonstrated to be vulnerable to backdoor attacks.

Statistical Bias in Dataset Replication

no code implementations ICML 2020 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

Dataset replication is a useful tool for assessing whether models have overfit to a specific validation set or the exact circumstances under which it was generated.

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

no code implementations 18 Dec 2020 Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein

As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance.

BIG-bench Machine Learning, Data Poisoning
