no code implementations • 19 Jun 2019 • Brandon Tran, Maryam Karimzadehgan, Rama Kumar Pasumarthi, Michael Bendersky, Donald Metzler
To address this data challenge, in this paper we propose a domain adaptation approach that fine-tunes the global model to each individual enterprise.
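The fine-tuning idea can be illustrated with a toy sketch: pre-train a simple classifier on a large synthetic "global" corpus, then continue training the same weights on a small "enterprise" sample. Everything below (the logistic-regression stand-in, the data, the learning rates) is a hypothetical illustration, not the paper's actual ranking model.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, steps=200):
    # Plain batch gradient descent on logistic loss; passing an existing
    # w fine-tunes it instead of training from scratch.
    n, d = X.shape
    if w is None:
        w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / n
    return w

rng = np.random.default_rng(0)
# Synthetic stand-ins: a large "global" corpus and a small "enterprise"
# sample whose labeling rule differs from the global one.
X_glob = rng.normal(size=(500, 5)); y_glob = (X_glob[:, 0] > 0).astype(float)
X_ent = rng.normal(size=(40, 5));   y_ent = (X_ent[:, 1] > 0).astype(float)

w_global = train_logreg(X_glob, y_glob)                       # global model
w_tuned = train_logreg(X_ent, y_ent, w=w_global, lr=0.05, steps=100)

def accuracy(w, X, y):
    return float(((X @ w > 0).astype(float) == y).mean())
```

After fine-tuning on the small enterprise sample, the adapted model fits that domain better than the global model does, which is the mechanism the paper exploits at scale.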
1 code implementation • NeurIPS 2019 • Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry
We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis.
Ranked #63 on Image Generation on CIFAR-10 (Inception score metric)
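The core loop behind classifier-driven synthesis is gradient ascent on a target class's score with respect to the input. A minimal sketch, assuming a toy linear "classifier" in place of the robust deep network the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 16))   # toy 3-class linear classifier over 16-dim inputs

def synthesize(target, steps=200, lr=0.1):
    # Gradient ascent on the target class's logit with respect to the
    # input.  For a linear model the gradient of logit_t is simply
    # W[target]; a deep network would use backprop here instead.
    x = 0.01 * rng.normal(size=16)
    for _ in range(steps):
        x = np.clip(x + lr * W[target], -1.0, 1.0)  # keep input in range
    return x

x = synthesize(0)
logits = W @ x   # the synthesized input is now classified as the target
```

With a robust classifier, the same ascent produces perceptually meaningful images rather than noise, which is what lets classification alone drive synthesis.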
5 code implementations • 3 Jun 2019 • Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry
In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.
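Robust optimization here means solving a min-max problem: minimize the worst-case loss over a perturbation ball around each input. A hedged sketch for a linear model, where the inner maximization has a closed form (the paper applies the same objective to deep networks, where the inner step is approximated with PGD):

```python
import numpy as np

def robust_train(X, y, eps=0.3, lr=0.1, steps=300):
    # min_w E[ max_{||delta||_inf <= eps} loss(w; x + delta, y) ].
    # For a linear model with logistic loss the inner max is closed-form:
    # the worst perturbation is delta = -eps * y * sign(w).
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        X_adv = X - eps * y[:, None] * np.sign(w)   # worst-case inputs
        m = y * (X_adv @ w)                         # adversarial margins
        g = (-(y / (1.0 + np.exp(m))))[:, None] * X_adv
        w = w - lr * g.mean(axis=0)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = np.where(X[:, 0] > 0, 1.0, -1.0)   # only feature 0 is informative
w = robust_train(X, y)
```

The adversarial term acts like a prior that penalizes weight on uninformative coordinates, so the learned `w` concentrates on the genuinely predictive feature.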
4 code implementations • NeurIPS 2019 • Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear.
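Why adversarial examples are so easy to find can be seen even in a linear toy model: in high dimensions, a perturbation that is tiny per-coordinate can move the margin a lot. A minimal sketch (the classifier and inputs below are synthetic, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=20)        # toy linear binary classifier
x = rng.normal(size=20)
y = np.sign(w @ x)             # the model's own clean prediction

# For a linear model, the l_inf perturbation that hurts the margin most
# moves every coordinate by eps against sign(w); the smallest budget
# that flips the prediction is margin / ||w||_1, which shrinks as the
# dimension grows.
margin = y * (w @ x)
eps = margin / np.abs(w).sum() + 1e-3   # just past the flip threshold
x_adv = x - eps * y * np.sign(w)
```

Each coordinate of `x_adv` differs from `x` by only `eps`, yet the prediction flips, since the many small changes all align with the gradient.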
1 code implementation • NeurIPS 2018 • Brandon Tran, Jerry Li, Aleksander Madry
In this paper, we identify a new property of all known backdoor attacks, which we call "spectral signatures".
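The detection idea is that poisoned examples form a spectrally separated cluster in representation space: score each example by its squared projection onto the top singular direction of the centered representation matrix, and the highest-scoring examples are the suspects. A sketch on synthetic representations (the shift magnitude and dimensions below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 30
clean = rng.normal(size=(190, d))
# Poisoned examples: same noise plus a shared shift in representation
# space -- the separation a backdoor trigger tends to leave behind.
shift = 8.0 * np.ones(d) / np.sqrt(d)
poisoned = rng.normal(size=(10, d)) + shift
R = np.vstack([clean, poisoned])       # stand-in for learned representations

# Spectral-signature score: squared projection of each centered
# representation onto the top right singular vector.
M = R - R.mean(axis=0)
top = np.linalg.svd(M, full_matrices=False)[2][0]
scores = (M @ top) ** 2
flagged = set(np.argsort(scores)[-10:].tolist())   # highest-scoring examples
```

Removing the flagged examples and retraining is then the defense: the backdoor's own statistical footprint is what gives it away.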
2 code implementations • 7 Dec 2017 • Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry
The study of adversarial robustness has so far largely focused on perturbations bounded in ℓ_p-norms.
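Spatial perturbations such as translations and rotations fall outside the ℓ_p threat model, and the worst one can be found by simple exhaustive search over a small grid. A toy sketch using translations only (rotations work the same way; the 8×8 "image" and linear scorer are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.normal(size=(8, 8))          # toy "image"
w = rng.normal(size=(8, 8))            # toy linear classifier

def score(im):
    return float((w * im).sum())

# Spatial attack by grid search: enumerate a small set of translations
# and keep the one that most lowers the classifier's score.  No gradient
# is needed because the transformation space is tiny.
shifts = [(dy, dx) for dy in (-2, -1, 0, 1, 2) for dx in (-2, -1, 0, 1, 2)]
worst = min(shifts, key=lambda s: score(np.roll(img, s, axis=(0, 1))))
adv = np.roll(img, worst, axis=(0, 1))
```

Since the identity shift `(0, 0)` is in the grid, the chosen transformation can never increase the score, and in practice a tiny rotation or shift is often enough to change a CNN's prediction.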