1 code implementation • 11 Dec 2023 • Kristian Georgiev, Joshua Vendrow, Hadi Salman, Sung Min Park, Aleksander Madry
Then, we provide a method for computing these attributions efficiently.
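The excerpt only states that the attributions can be computed efficiently. A common recipe in this line of work, which may or may not match the paper's exact method, is to score each training example by the similarity of its per-example gradient to the target example's gradient, after a random projection that makes the high-dimensional gradients tractable to store. A minimal sketch with synthetic stand-in gradients (all names and sizes here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: per-example gradients of a trained model's output,
# one row per training example (random stand-ins for illustration).
n_train, d, k = 1000, 5000, 64
train_grads = rng.normal(size=(n_train, d))
target_grad = rng.normal(size=d)  # gradient for the example being attributed

# Random projection to k dimensions; inner products are approximately
# preserved, so attribution scores survive the compression.
P = rng.normal(size=(d, k)) / np.sqrt(k)
phi_train = train_grads @ P    # shape (n_train, k)
phi_target = target_grad @ P   # shape (k,)

# Attribution score per training example: projected-gradient similarity.
scores = phi_train @ phi_target
top_influencers = np.argsort(scores)[::-1][:10]
```

Storing the `k`-dimensional projections instead of full gradients is what makes this kind of scoring feasible at scale.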
2 code implementations • CVPR 2023 • Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry
For example, we are able to train a ResNet-50 on ImageNet to 75% accuracy in only 20 minutes on a single machine.
2 code implementations • 24 Mar 2023 • Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry
That is, computationally tractable methods struggle to accurately attribute model predictions in non-convex settings (e.g., deep neural networks), while methods that are effective in such regimes require training thousands of models, making them impractical for large models or datasets.
1 code implementation • 22 Nov 2022 • Harshay Shah, Sung Min Park, Andrew Ilyas, Aleksander Madry
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms.
1 code implementation • CVPR 2023 • Saachi Jain, Hadi Salman, Alaa Khaddaj, Eric Wong, Sung Min Park, Aleksander Madry
It is commonly believed that, in transfer learning, including more pre-training data translates into better performance.
1 code implementation • 1 Feb 2022 • Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry
We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data.
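At its core, a datamodel predicts a model's output on a fixed target example as a simple (here, linear) function of which training points were included in the training set. A toy sketch of that idea, using a synthetic ground truth in place of actually retraining models on each subset (variable names and the noise setup are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2000 random 50% subsets of a 200-example training set,
# encoded as 0/1 inclusion masks.
n_train, n_subsets = 200, 2000
masks = (rng.random((n_subsets, n_train)) < 0.5).astype(float)

# Stand-in for "train on subset S, evaluate on the target example":
# a synthetic linear ground truth plus noise, since retraining real
# models thousands of times is out of scope for a sketch.
true_weights = rng.normal(size=n_train)
outputs = masks @ true_weights + 0.1 * rng.normal(size=n_subsets)

# Fit the linear datamodel by least squares (the framework would use
# regularized regression in practice).
theta, *_ = np.linalg.lstsq(masks, outputs, rcond=None)

# theta[i] estimates training example i's effect on the target prediction.
```

On this toy data the recovered `theta` correlates strongly with `true_weights`; with real models, the regression targets come from actually training on each sampled subset.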
no code implementations • 31 Dec 2021 • Sung Min Park, Kuo-An Wei, Kai Xiao, Jerry Li, Aleksander Madry
We identify properties of universal adversarial perturbations (UAPs) that distinguish them from standard adversarial perturbations.
no code implementations • 1 Jan 2021 • Sung Min Park, Kuo-An Wei, Kai Yuanqing Xiao, Jerry Li, Aleksander Madry
We study universal adversarial perturbations and demonstrate that the above picture is more nuanced.
no code implementations • NeurIPS 2018 • Guy Bresler, Sung Min Park, Madalina Persu
Sparse Principal Component Analysis (SPCA) and Sparse Linear Regression (SLR) have a wide range of applications and have attracted a tremendous amount of attention in the last two decades as canonical examples of statistical problems in high dimension.