Search Results for author: Guillaume Leclerc

Found 10 papers, 5 papers with code

Rethinking Backdoor Attacks

no code implementations • 19 Jul 2023 Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, Aleksander Madry

In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
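To make the setup concrete, here is a minimal illustrative sketch of how such backdoor examples might be constructed: a fixed trigger patch is stamped onto a small fraction of training images, which are then relabeled to the attacker's target class. All names and parameters here are hypothetical; this is not the formulation from the paper itself.

```python
import numpy as np

def add_trigger(image, patch_size=3):
    """Stamp a small white patch in the corner as the backdoor trigger."""
    poisoned = image.copy()
    poisoned[:patch_size, :patch_size] = 1.0  # trigger pattern
    return poisoned

def poison_dataset(images, labels, target_label, fraction=0.05, seed=0):
    """Replace a fraction of training examples with triggered copies
    relabeled to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

# Tiny synthetic example: 100 "images" of shape 8x8, all labeled 0
imgs = np.zeros((100, 8, 8))
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls = poison_dataset(imgs, lbls, target_label=7, fraction=0.05)
```

A model trained on the poisoned set then learns to associate the trigger with the target class, so an adversary can flip predictions at test time by stamping the same patch onto any input.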

Backdoor Attack

FFCV: Accelerating Training by Removing Data Bottlenecks

2 code implementations • CVPR 2023 Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry

For example, we are able to train an ImageNet ResNet-50 model to 75% in only 20 mins on a single machine.

TRAK: Attributing Model Behavior at Scale

2 code implementations • 24 Mar 2023 Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry

That is, computationally tractable methods can struggle with accurately attributing model predictions in non-convex settings (e.g., in the context of deep neural networks), while methods that are effective in such regimes require training thousands of models, which makes them impractical for large models or datasets.

Raising the Cost of Malicious AI-Powered Image Editing

1 code implementation • 13 Feb 2023 Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, Aleksander Madry

We present an approach to mitigating the risks of malicious image editing posed by large diffusion models.

Datamodels: Predicting Predictions from Training Data

1 code implementation • 1 Feb 2022 Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry

We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data.
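One way to picture the idea is as a linear surrogate that predicts a model output on a fixed test example from a 0/1 mask of which training examples were included. The sketch below uses a synthetic stand-in for the model output (the linear setup, variable names, and noise level are assumptions for illustration, not the paper's actual pipeline, which retrains real models on many subsets).

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_subsets = 50, 500

# Random training subsets: each row is a 0/1 inclusion mask over examples
masks = (rng.random((n_subsets, n_train)) < 0.5).astype(float)

# Stand-in for "model output on a fixed test example" after training on
# each subset: here a synthetic linear signal plus noise
true_w = rng.normal(size=n_train)
outputs = masks @ true_w + 0.1 * rng.normal(size=n_subsets)

# Datamodel: least-squares fit of the output on the inclusion mask
w, *_ = np.linalg.lstsq(masks, outputs, rcond=None)

# w[i] then estimates training example i's influence on the prediction
```

Under this setup, the fitted weights recover the influence of each training example: a large positive `w[i]` means including example `i` pushes the prediction up.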

3DB: A Framework for Debugging Computer Vision Models

1 code implementation • 7 Jun 2021 Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry

We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.

Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy

no code implementations • 26 Feb 2020 Aditya Saligrama, Guillaume Leclerc

A necessary characteristic for deploying deep learning models in real-world applications is resistance to small adversarial perturbations while maintaining accuracy on non-malicious inputs.

The Two Regimes of Deep Network Training

no code implementations • 24 Feb 2020 Guillaume Leclerc, Aleksander Madry

The learning rate schedule has a major impact on the performance of deep learning models.

Smallify: Learning Network Size while Training

no code implementations • 10 Jun 2018 Guillaume Leclerc, Manasi Vartak, Raul Castro Fernandez, Tim Kraska, Samuel Madden

As neural networks become widely deployed in different applications and on different hardware, it has become increasingly important to optimize inference time and model size along with model accuracy.
