no code implementations • 19 Jul 2023 • Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, Aleksander Madry
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
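As a hedged illustration of this setup (a classic trigger-patch poisoning, not the specific attack studied in this paper), the poisoning step might look like:

```python
# Minimal trigger-patch backdoor sketch; shapes and the trigger are illustrative.
import torch

def add_backdoor(images: torch.Tensor, labels: torch.Tensor,
                 target_class: int, patch_size: int = 3):
    """Stamp a bright trigger patch in one corner and relabel to the attacker's target."""
    poisoned = images.clone()
    poisoned[:, :, -patch_size:, -patch_size:] = 1.0  # trigger in the bottom-right corner
    return poisoned, torch.full_like(labels, target_class)

# Poison a small fraction of a (synthetic) training set.
images, labels = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
bad_images, bad_labels = add_backdoor(images[:2], labels[:2], target_class=0)
```

A model trained on the mixed clean-plus-poisoned data then learns to predict the target class whenever the trigger is present.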
2 code implementations • CVPR 2023 • Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry
For example, we are able to train an ImageNet ResNet-50 model to 75% accuracy in only 20 minutes on a single machine.
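That speed comes largely from FFCV's fast data-loading pipeline. A sketch of typical usage, based on the library's documented DatasetWriter/Loader API (the output path and `my_dataset` are placeholders):

```python
from ffcv.writer import DatasetWriter
from ffcv.fields import RGBImageField, IntField
from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
from ffcv.loader import Loader, OrderOption
from ffcv.transforms import ToTensor

# One-time: convert an indexed (image, label) dataset to FFCV's .beton format.
writer = DatasetWriter('/tmp/imagenet_train.beton',
                       {'image': RGBImageField(max_resolution=256),
                        'label': IntField()})
writer.from_indexed_dataset(my_dataset)  # my_dataset: any indexed (image, label) dataset

# Training time: a compiled, multi-worker loader replaces the stock PyTorch one.
loader = Loader('/tmp/imagenet_train.beton', batch_size=512, num_workers=8,
                order=OrderOption.RANDOM,
                pipelines={'image': [SimpleRGBImageDecoder(), ToTensor()],
                           'label': [IntDecoder(), ToTensor()]})
```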
2 code implementations • 24 Mar 2023 • Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry
That is, computationally tractable methods can struggle with accurately attributing model predictions in non-convex settings (e.g., in the context of deep neural networks), while methods that are effective in such regimes require training thousands of models, which makes them impractical for large models or datasets.
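For contrast, a hedged sketch of one such tractable heuristic, gradient similarity between a training and a test example (cheap but crude, and not the method this paper proposes; names are illustrative):

```python
import torch

def grad_features(model, loss_fn, x, y):
    """Flattened loss gradient w.r.t. all model parameters for one example."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

def attribution_score(model, loss_fn, train_ex, test_ex):
    g_train = grad_features(model, loss_fn, *train_ex)
    g_test = grad_features(model, loss_fn, *test_ex)
    return torch.dot(g_train, g_test)  # larger = training example deemed more influential
```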
1 code implementation • 13 Feb 2023 • Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, Aleksander Madry
We present an approach to mitigating the risks of malicious image editing posed by large diffusion models.
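At a high level, the idea is to "immunize" an image with a small adversarial perturbation that disrupts the editing model's image encoder. A hedged sketch of that general recipe (`encoder` is a stand-in for a latent-diffusion image encoder; this is not the paper's exact procedure):

```python
import torch

def immunize(image, encoder, eps=8/255, steps=40, step_size=1/255):
    """PGD-style perturbation pushing the image's latent away from its original."""
    delta = torch.zeros_like(image, requires_grad=True)
    target = encoder(image).detach()
    for _ in range(steps):
        # Negative latent distance: descending this loss increases the distance.
        loss = -(encoder(image + delta) - target).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)   # keep the perturbation imperceptibly small
            delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```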
no code implementations • 19 Jun 2022 • Chong Guo, Michael J. Lee, Guillaume Leclerc, Joel Dapello, Yug Rao, Aleksander Madry, James J. DiCarlo
The visual systems of primates are the gold standard of robust perception.
1 code implementation • 1 Feb 2022 • Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry
We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data.
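A minimal sketch of the core idea: fit a sparse linear model from training-subset membership indicators to a model output of interest (the data below is synthetic):

```python
import numpy as np
from sklearn.linear_model import Lasso

n_train, n_subsets = 100, 500
# Row i marks which training examples were included in random subset S_i.
masks = (np.random.rand(n_subsets, n_train) < 0.5).astype(float)
# Stand-in for the model output f(x) after training on each subset S_i.
outputs = np.random.randn(n_subsets)

datamodel = Lasso(alpha=0.01).fit(masks, outputs)
weights = datamodel.coef_  # weight j ~ influence of training example j on this output
```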
1 code implementation • 7 Jun 2021 • Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry
We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.
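A hedged sketch of the render-then-evaluate loop such a framework enables; `render_scene`, `classify`, and the parameter grid below are hypothetical stand-ins, not 3DB's actual API:

```python
import itertools

def debug_model(model, render_scene, classify):
    """Sweep controlled scene parameters and record conditions where the model fails."""
    failures = []
    for pose, lighting in itertools.product(range(0, 360, 30), ["day", "night"]):
        image, true_label = render_scene(obj="mug", pose=pose, lighting=lighting)
        if classify(model, image) != true_label:
            failures.append({"pose": pose, "lighting": lighting})
    return failures
```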
no code implementations • 26 Feb 2020 • Aditya Saligrama, Guillaume Leclerc
Deep learning models deployed in real-world applications must resist small adversarial perturbations while maintaining accuracy on non-malicious inputs.
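A standard recipe toward that goal is adversarial training with PGD; a minimal sketch (the common recipe, not necessarily this paper's exact setup; assumes inputs in [0, 1]):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=7):
    """Find a worst-case perturbation within an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    """Train on adversarial examples instead of clean ones."""
    model.train()
    loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```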
no code implementations • 24 Feb 2020 • Guillaume Leclerc, Aleksander Madry
The learning rate schedule has a major impact on the performance of deep learning models.
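As a concrete example, one widely used schedule, step decay, in standard PyTorch (illustrative hyperparameters, not the paper's experimental settings):

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# Drop the learning rate by 10x every 30 epochs.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)

for epoch in range(90):
    # ... one epoch of forward/backward passes and opt.step() calls ...
    sched.step()  # advance the schedule once per epoch
```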
no code implementations • 10 Jun 2018 • Guillaume Leclerc, Manasi Vartak, Raul Castro Fernandez, Tim Kraska, Samuel Madden
As neural networks become widely deployed in different applications and on different hardware, it has become increasingly important to optimize inference time and model size along with model accuracy.
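A hedged sketch of one way to learn network size during training: per-neuron "switches" with an L1 penalty, so units whose gates shrink toward zero can be pruned (the general idea only; the paper's details differ):

```python
import torch
import torch.nn as nn

class SwitchedLinear(nn.Module):
    """Linear layer with a learnable gate per output unit."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.switch = nn.Parameter(torch.ones(d_out))  # one gate per output unit

    def forward(self, x):
        return self.linear(x) * self.switch  # gates near 0 mark prunable units

    def sparsity_penalty(self):
        # Add lambda * this term to the training loss to encourage small networks.
        return self.switch.abs().sum()
```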