no code implementations • 26 Dec 2023 • Ping-Yeh Chiang, Yipin Zhou, Omid Poursaeed, Satya Narayan Shukla, Ashish Shah, Tom Goldstein, Ser-Nam Lim
Recently, Pyramid Adversarial training (Herrmann et al., 2022) has been shown to be very effective for improving clean accuracy and distribution-shift robustness of vision transformers.
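The core idea of Pyramid Adversarial training (Herrmann et al., 2022) is to perturb each image with a pyramid of perturbations at several spatial scales and train on both the clean and adversarial views. Below is a minimal, hedged sketch of that multi-scale perturbation in PyTorch; the scale set, weights, and step sizes are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pyramid_perturbation(deltas, scales, weights, size):
    """Combine per-scale perturbations into one full-resolution perturbation."""
    total = torch.zeros(deltas[0].shape[0], 3, size, size, device=deltas[0].device)
    for delta, s, w in zip(deltas, scales, weights):
        up = F.interpolate(delta, size=(size, size), mode="nearest") if s > 1 else delta
        total = total + w * up
    return total

def pyramid_adversarial_step(model, x, y, scales=(32, 16, 1), weights=(20., 10., 1.),
                             steps=5, step_size=0.01, size=224):
    # One perturbation tensor per scale (coarse grids get upsampled to image size).
    deltas = [torch.zeros(x.shape[0], 3, size // s, size // s,
                          device=x.device, requires_grad=True) for s in scales]
    for _ in range(steps):
        x_adv = torch.clamp(x + pyramid_perturbation(deltas, scales, weights, size), 0, 1)
        loss = F.cross_entropy(model(x_adv), y)
        grads = torch.autograd.grad(loss, deltas)
        with torch.no_grad():
            for d, g in zip(deltas, grads):          # ascend the loss at every scale
                d += step_size * g.sign()
                d.clamp_(-1, 1)
    x_adv = torch.clamp(x + pyramid_perturbation(deltas, scales, weights, size).detach(), 0, 1)
    # Train on the clean and adversarial views together.
    return F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
```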
3 code implementations • 9 Oct 2023 • Neel Jain, Ping-Yeh Chiang, Yuxin Wen, John Kirchenbauer, Hong-Min Chu, Gowthami Somepalli, Brian R. Bartoldson, Bhavya Kailkhura, Avi Schwarzschild, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein
We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation.
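The augmentation in question (NEFTune) adds uniform noise to the token embeddings during finetuning, scaled by alpha / sqrt(L * d) for sequence length L and embedding dimension d. A minimal sketch of that idea, assuming a standard embedding layer; alpha = 5 is an illustrative value.

```python
import torch

def neftune_embed(embed_layer, input_ids, alpha=5.0, training=True):
    """Token embeddings with NEFTune-style uniform noise added during training."""
    embeds = embed_layer(input_ids)                 # (batch, seq_len, dim)
    if training:
        L, d = embeds.shape[1], embeds.shape[2]
        scale = alpha / (L * d) ** 0.5              # alpha / sqrt(L * d)
        noise = torch.empty_like(embeds).uniform_(-1, 1) * scale
        embeds = embeds + noise
    return embeds
```

At evaluation time the noise is disabled, so the finetuned model is used exactly as usual.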
1 code implementation • 1 Sep 2023 • Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-Yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
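One of the baseline defenses studied in this setting is a perplexity filter: adversarial suffixes produced by discrete optimizers tend to be high-perplexity gibberish, so a small language model can flag them. A hedged sketch using Hugging Face transformers; the scoring model and threshold are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                       # any small causal LM works for scoring
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss   # mean negative log-likelihood per token
    return torch.exp(loss).item()

def passes_filter(prompt: str, threshold: float = 1000.0) -> bool:
    """Reject prompts whose perplexity exceeds the (assumed) threshold."""
    return perplexity(prompt) < threshold
```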
no code implementations • 23 Oct 2022 • Renkun Ni, Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Andrew Gordon Wilson, Tom Goldstein
Sharpness-Aware Minimization (SAM) has recently emerged as a robust technique for improving the accuracy of deep neural networks.
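SAM seeks parameters that sit in neighborhoods of uniformly low loss: it takes an ascent step of radius rho in parameter space, then descends using the gradient measured at that perturbed point. A minimal sketch of the generic SAM update in PyTorch (not this paper's contribution, which analyzes SAM's behavior).

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One Sharpness-Aware Minimization update (generic SAM, Foret et al.)."""
    # 1) gradient at the current weights
    loss = loss_fn(model(x), y)
    loss.backward()
    grads = [p.grad.detach() for p in model.parameters() if p.grad is not None]
    norm = torch.norm(torch.stack([g.norm() for g in grads]))

    # 2) ascend to the worst-case nearby weights w + e(w)
    with torch.no_grad():
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()

    # 3) gradient at the perturbed weights, then undo the perturbation and step
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```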
1 code implementation • 16 Jul 2022 • Arpit Bansal, Ping-Yeh Chiang, Michael Curry, Rajiv Jain, Curtis Wigington, Varun Manjunatha, John P Dickerson, Tom Goldstein
Watermarking is a commonly used strategy to protect creators' rights to digital images, videos and audio.
1 code implementation • 25 Nov 2021 • Zeyad Ali Sami Emam, Hong-Min Chu, Ping-Yeh Chiang, Wojciech Czaja, Richard Leapman, Micah Goldblum, Tom Goldstein
Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNNs) can achieve better performance when trained on this labeled subset.
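For context, an AL acquisition step looks like the sketch below, which selects the unlabeled examples with the highest predictive entropy; this is a generic uncertainty-sampling baseline, not the strategy proposed in the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_for_annotation(model, unlabeled_loader, budget=100, device="cuda"):
    """Pick the `budget` unlabeled examples with the highest predictive entropy."""
    model.eval()
    scores, indices = [], []
    for idx, x in unlabeled_loader:                  # assumes the loader yields (index, image)
        probs = F.softmax(model(x.to(device)), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        scores.append(entropy.cpu())
        indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    top = scores.topk(budget).indices
    return indices[top]                              # dataset indices to send for labeling
```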
no code implementations • 29 Sep 2021 • Liam H Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Amit Bansal, Wojciech Czaja, Tom Goldstein
These two behaviors can be in conflict as an organization wants to prevent competitors from using their own data to replicate the performance of their proprietary models.
2 code implementations • NeurIPS 2021 • Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.
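The attack studied here repurposes a test-time tool as a training-time one: craft adversarial perturbations for every training image against a pretrained "crafting" model and release the perturbed set, which then poisons any model trained on it. A hedged PGD-style sketch of crafting such perturbations; epsilon and step counts are illustrative.

```python
import torch
import torch.nn.functional as F

def craft_poison(crafting_model, x, y_target, eps=8/255, step_size=2/255, steps=40):
    """L-inf PGD toward an (incorrect) target label; the perturbed images are the poison."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(crafting_model(x + delta), y_target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step_size * grad.sign()          # descend: push toward the target class
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep the poisoned image in [0, 1]
    return (x + delta).detach()
```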
no code implementations • 16 Feb 2021 • Liam Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, Wojtek Czaja, Tom Goldstein
Large organizations such as social media companies continually release data, for example user images.
no code implementations • 1 Jan 2021 • Arpit Amit Bansal, Ping-Yeh Chiang, Michael Curry, Hossein Souri, Rama Chellappa, John P Dickerson, Rajiv Jain, Tom Goldstein
Watermarking is a commonly used strategy to protect creators' rights to digital images, videos and audio.
no code implementations • ICLR 2021 • Renkun Ni, Hong-Min Chu, Oscar Castaneda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein
Low-precision neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.
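For context, "few bits" typically means something like 4-bit weights and activations realized with fake quantization during training. The sketch below is a generic symmetric fake-quantizer with a straight-through estimator, given only to make the setting concrete; it is not the low-resolution accumulation scheme this paper develops.

```python
import torch

def fake_quantize(x, num_bits=4):
    """Symmetric uniform fake quantization with a straight-through estimator (generic sketch)."""
    qmax = 2 ** (num_bits - 1) - 1                               # e.g. 7 for signed 4-bit values
    scale = x.detach().abs().max() / qmax + 1e-12
    x_scaled = x / scale
    q = x_scaled + (torch.round(x_scaled) - x_scaled).detach()   # round, but pass gradients through
    return torch.clamp(q, -qmax - 1, qmax) * scale

# Example: quantize a weight tensor and an activation tensor to 4 bits.
w = torch.randn(64, 128)
a = torch.relu(torch.randn(32, 128))
w_q, a_q = fake_quantize(w), fake_quantize(a)
```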
no code implementations • NeurIPS 2020 • Ping-Yeh Chiang, Michael Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein
Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date.
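The defense proposed here treats detection as regression and certifies the regressed outputs with median smoothing: each coordinate is replaced by the median of its value over many Gaussian-noised copies of the input. A stripped-down sketch of the smoothing step for a single regressed quantity; sigma and the sample count are illustrative.

```python
import torch

@torch.no_grad()
def median_smooth(regressor, x, sigma=0.25, n_samples=100):
    """Median-smoothed prediction of a regression output (e.g. one bounding-box coordinate)."""
    preds = []
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)     # Gaussian input noise
        preds.append(regressor(noisy))
    preds = torch.stack(preds, dim=0)               # (n_samples, ...) predictions
    return preds.median(dim=0).values               # the smoothed (certifiable) output
```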
no code implementations • 13 Oct 2020 • Kevin Kuo, Anthony Ostuni, Elizabeth Horishny, Michael J. Curry, Samuel Dooley, Ping-Yeh Chiang, Tom Goldstein, John P. Dickerson
Inspired by these advances, in this paper, we extend techniques for approximating auctions using deep learning to address concerns of fairness while maintaining high revenue and strong incentive guarantees.
no code implementations • 26 Jul 2020 • Renkun Ni, Hong-Min Chu, Oscar Castañeda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein
Low-resolution neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.
1 code implementation • 7 Jul 2020 • Ping-Yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein
While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive.
no code implementations • NeurIPS 2020 • Michael J. Curry, Ping-Yeh Chiang, Tom Goldstein, John Dickerson
We focus on the RegretNet architecture, which can represent auctions with arbitrary numbers of items and participants; it is trained to be empirically strategyproof, but this property is never exactly verified, leaving potential loopholes for market participants to exploit.
1 code implementation • ICLR 2020 • Ping-Yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
no code implementations • 22 Feb 2020 • Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein
Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.
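As a reminder of what such a certification bound computes, the simplest relaxation, interval bound propagation, pushes elementwise lower and upper bounds through each layer; tighter convex relaxations follow the same pattern with better per-layer bounds. A minimal sketch for a linear layer followed by ReLU (not the method developed in this paper).

```python
import torch
import torch.nn as nn

def interval_bounds_linear(layer: nn.Linear, lb: torch.Tensor, ub: torch.Tensor):
    """Propagate elementwise input bounds [lb, ub] through a linear layer."""
    center, radius = (ub + lb) / 2, (ub - lb) / 2
    out_center = center @ layer.weight.t() + layer.bias
    out_radius = radius @ layer.weight.abs().t()     # |W| spreads the interval radius
    return out_center - out_radius, out_center + out_radius

# Example: bounds on ReLU(Wx + b) for inputs within an L-inf ball of radius eps around x.
layer, x, eps = nn.Linear(10, 5), torch.randn(1, 10), 0.1
lb, ub = interval_bounds_linear(layer, x - eps, x + eps)
lb, ub = torch.relu(lb), torch.relu(ub)              # ReLU is monotone, so bounds pass through
```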
no code implementations • 18 Nov 2019 • Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi
State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.
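For reference, the expensive baseline being improved upon looks like the loop below: iterative L-inf PGD repeated from several random starting points, keeping the worst-case example found. Step sizes, iteration counts, and the restart budget are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_with_restarts(model, x, y, eps=8/255, step_size=2/255, steps=40, restarts=10):
    """Standard multi-restart L-inf PGD: costly, which is what cheaper attacks try to avoid."""
    best_adv = x.clone()
    best_loss = torch.full((x.shape[0],), -float("inf"), device=x.device)
    for _ in range(restarts):
        delta = (torch.rand_like(x) * 2 - 1) * eps            # random start inside the ball
        delta.requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta += step_size * grad.sign()
                delta.clamp_(-eps, eps)
        with torch.no_grad():
            adv = (x + delta).clamp(0, 1)
            losses = F.cross_entropy(model(adv), y, reduction="none")
            improved = losses > best_loss
            best_adv[improved], best_loss[improved] = adv[improved], losses[improved]
    return best_adv
```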
no code implementations • 25 Sep 2019 • Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein
Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical (PGD) robustness.
no code implementations • 1 Feb 2019 • Angeline Aguinaldo, Ping-Yeh Chiang, Alex Gain, Ameya Patil, Kolten Pearson, Soheil Feizi
From our experiments, we observe a qualitative limit on GAN compression.
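The compression scheme studied here distills a large pretrained generator into a much smaller student by making the two produce similar images from the same latent codes. A hedged sketch of the core distillation loss (per-pixel matching on shared latents); the paper additionally explores joint adversarial objectives.

```python
import torch
import torch.nn.functional as F

def distill_generator(teacher_G, student_G, optimizer, z_dim=128, steps=10000, batch=64,
                      device="cuda"):
    """Train a small student generator to mimic a frozen teacher on shared latent codes."""
    teacher_G.eval()
    for _ in range(steps):
        z = torch.randn(batch, z_dim, device=device)      # same latent fed to both generators
        with torch.no_grad():
            target = teacher_G(z)                          # teacher's image is the target
        loss = F.mse_loss(student_G(z), target)            # per-pixel distillation loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return student_G
```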