Search Results for author: Ping-Yeh Chiang

Found 21 papers, 7 papers with code

Improved Training of Certifiably Robust Models

no code implementations · 25 Sep 2019 · Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical (PGD) robustness.
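For context, the loosest of these convex relaxations is interval bound propagation (IBP). The sketch below is an illustration of that baseline relaxation only, not the tighter bounds this paper trains with; all names are illustrative.

```python
import torch

def interval_bound_linear(W, b, x_lo, x_hi):
    """Propagate the box [x_lo, x_hi] through y = W @ x + b.

    Splitting W into positive and negative parts yields elementwise
    output bounds -- interval bound propagation, the loosest of the
    convex relaxations this line of work tightens.
    """
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return y_lo, y_hi

# Toy usage: bound the logits of a one-layer net on an L-inf ball.
W, b = torch.randn(3, 5), torch.zeros(3)
x, eps = torch.randn(5), 0.1
y_lo, y_hi = interval_bound_linear(W, b, x - eps, x + eps)
# If the true class's lower bound beats every other class's upper
# bound, the input is certifiably robust under this relaxation.
```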

WITCHcraft: Efficient PGD attacks with random step size

no code implementations · 18 Nov 2019 · Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi

State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.

Computational Efficiency
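The core idea of the entry above, sketched with hedges: run PGD but draw a fresh random step size each iteration instead of paying for many restarts. The uniform step-size distribution below is an assumption, not necessarily the paper's exact schedule.

```python
import torch

def pgd_random_step(model, x, y, eps=8/255, steps=20):
    """PGD with a freshly sampled step size per iteration (in the
    spirit of WITCHcraft), avoiding the cost of many restarts.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        alpha = float(torch.empty(1).uniform_(0, eps))  # random step size
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1)
```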

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

no code implementations · 22 Feb 2020 · Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.

Certified Defenses for Adversarial Patches

1 code implementation · ICLR 2020 · Ping-Yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein

Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.

Certifying Strategyproof Auction Networks

no code implementations · NeurIPS 2020 · Michael J. Curry, Ping-Yeh Chiang, Tom Goldstein, John Dickerson

We focus on the RegretNet architecture, which can represent auctions with arbitrary numbers of items and participants; it is trained to be empirically strategyproof, but the property is never exactly verified, leaving potential loopholes for market participants to exploit.
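To make "empirically strategyproof" concrete: RegretNet-style training minimizes each bidder's regret, the utility gained by an optimal misreport, estimated by gradient ascent. The sketch below assumes a hypothetical `mechanism(bids) -> (allocations, payments)` interface and shows only the quantity being certified, not the paper's verification procedure.

```python
import torch

def empirical_regret(mechanism, valuations, i, steps=50, lr=0.1):
    """Gradient-based estimate of bidder i's regret: the utility gained
    by an optimal misreport while all others bid truthfully. RegretNet
    drives this toward zero during training; certification instead
    upper-bounds it exactly.
    """
    alloc, pay = mechanism(valuations)
    u_truthful = (alloc[i] * valuations[i]).sum() - pay[i]

    misreport = valuations[i].clone().requires_grad_(True)
    opt = torch.optim.Adam([misreport], lr=lr)
    for _ in range(steps):
        bids = valuations.clone()
        bids[i] = misreport                      # everyone else truthful
        alloc, pay = mechanism(bids)
        u_lie = (alloc[i] * valuations[i]).sum() - pay[i]  # true values
        opt.zero_grad()
        (-u_lie).backward()                      # maximize lying utility
        opt.step()

    with torch.no_grad():
        bids = valuations.clone()
        bids[i] = misreport
        alloc, pay = mechanism(bids)
        u_best = (alloc[i] * valuations[i]).sum() - pay[i]
    return float((u_best - u_truthful).clamp_min(0))
```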

Detection as Regression: Certified Object Detection by Median Smoothing

1 code implementation · 7 Jul 2020 · Ping-Yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein

While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive.

Object Detection +2
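A rough sketch of the median-smoothing idea named in the title: treat detection outputs as regressed values, evaluate them under Gaussian input noise, and return per-coordinate medians. The certificate itself comes from percentile (order-statistic) bounds, which are omitted here; `sigma` and `n` are illustrative.

```python
import torch

def median_smooth(f, x, sigma=0.25, n=100):
    """Median smoothing for a regression output such as box coordinates:
    evaluate f on Gaussian-perturbed copies of x and return the
    per-coordinate median of the predictions.
    """
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
    outs = torch.stack([f(xi) for xi in noisy])  # (n, d) regressed values
    return outs.median(dim=0).values
```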

WrapNet: Neural Net Inference with Ultra-Low-Resolution Arithmetic

no code implementations · 26 Jul 2020 · Renkun Ni, Hong-Min Chu, Oscar Castañeda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein

Low-resolution neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.

Quantization
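For orientation, here is the generic low-bit fake-quantization step such networks build on. WrapNet's actual contribution, tolerating overflow in ultra-low-resolution accumulators via a cyclic activation, is not reproduced in this sketch.

```python
import torch

def fake_quantize(t, bits=4):
    """Uniform symmetric fake quantization of weights or activations to
    `bits` bits: scale, round to the integer grid, and rescale.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = t.abs().max().clamp_min(1e-8) / qmax
    return (t / scale).round().clamp(-qmax, qmax) * scale
```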

ProportionNet: Balancing Fairness and Revenue for Auction Design with Deep Learning

no code implementations · 13 Oct 2020 · Kevin Kuo, Anthony Ostuni, Elizabeth Horishny, Michael J. Curry, Samuel Dooley, Ping-Yeh Chiang, Tom Goldstein, John P. Dickerson

Inspired by these advances, in this paper, we extend techniques for approximating auctions using deep learning to address concerns of fairness while maintaining high revenue and strong incentive guarantees.

Fairness

Certified Watermarks for Neural Networks

no code implementations · 1 Jan 2021 · Arpit Amit Bansal, Ping-Yeh Chiang, Michael Curry, Hossein Souri, Rama Chellappa, John P Dickerson, Rajiv Jain, Tom Goldstein

Watermarking is a commonly used strategy to protect creators' rights to digital images, videos and audio.

WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic

no code implementations · ICLR 2021 · Renkun Ni, Hong-Min Chu, Oscar Castaneda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein

Low-precision neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.

Quantization

Adversarial Examples Make Strong Poisons

2 code implementations · NeurIPS 2021 · Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.

Data Poisoning

Protecting Proprietary Data: Poisoning for Secure Dataset Release

no code implementations · 29 Sep 2021 · Liam H Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Amit Bansal, Wojciech Czaja, Tom Goldstein

These two behaviors can be in conflict as an organization wants to prevent competitors from using their own data to replicate the performance of their proprietary models.

Data Poisoning

Active Learning at the ImageNet Scale

1 code implementation · 25 Nov 2021 · Zeyad Ali Sami Emam, Hong-Min Chu, Ping-Yeh Chiang, Wojciech Czaja, Richard Leapman, Micah Goldblum, Tom Goldstein

Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNN) can achieve better performance when trained on this labeled subset.

Active Learning
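As a reference point for the acquisition step described above, here is textbook entropy-based uncertainty sampling. The paper's contribution concerns making AL work at ImageNet scale (e.g., with self-supervised features), which this sketch does not capture; the loader interface is an assumption.

```python
import torch

def select_for_annotation(model, unlabeled_loader, k=1000):
    """Score each unlabeled example by predictive entropy and return the
    indices of the k most uncertain ones for annotation. The loader is
    assumed to yield (index, image) batches.
    """
    scores, ids = [], []
    model.eval()
    with torch.no_grad():
        for idx, x in unlabeled_loader:
            p = model(x).softmax(dim=-1)
            entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
            scores.append(entropy)
            ids.append(idx)
    scores, ids = torch.cat(scores), torch.cat(ids)
    return ids[scores.topk(k).indices]  # the k most uncertain examples
```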

Certified Neural Network Watermarks with Randomized Smoothing

1 code implementation · 16 Jul 2022 · Arpit Bansal, Ping-Yeh Chiang, Michael Curry, Rajiv Jain, Curtis Wigington, Varun Manjunatha, John P Dickerson, Tom Goldstein

Watermarking is a commonly used strategy to protect creators' rights to digital images, videos and audio.
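A hedged sketch of the Monte-Carlo step at the heart of such a certificate: estimate trigger-set (watermark) accuracy under Gaussian noise on the model parameters. The paper's actual guarantee, that the watermark survives any parameter change of bounded norm, requires the full randomized-smoothing analysis on top of this; `sigma` and `n` are illustrative.

```python
import copy
import torch

def smoothed_trigger_accuracy(model, trigger_x, trigger_y, sigma=0.01, n=50):
    """Average trigger-set accuracy over n Gaussian perturbations of the
    model parameters (a smoothed measure of watermark persistence).
    """
    acc = 0.0
    for _ in range(n):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))
            preds = noisy(trigger_x).argmax(dim=-1)
        acc += (preds == trigger_y).float().mean().item()
    return acc / n
```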

K-SAM: Sharpness-Aware Minimization at the Speed of SGD

no code implementations · 23 Oct 2022 · Renkun Ni, Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Andrew Gordon Wilson, Tom Goldstein

Sharpness-Aware Minimization (SAM) has recently emerged as a robust technique for improving the accuracy of deep neural networks.
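For reference, a minimal single SAM update: perturb the weights toward higher loss, take the gradient there, then update the original weights with it. K-SAM's speedup, computing both gradients only on the top-k highest-loss examples in the batch, is omitted from this sketch.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One SAM update: ascend to an adversarial weight perturbation of
    norm rho, compute the gradient there, then descend from the
    original weights using that gradient."""
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()          # gradient at current weights
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                        # climb to the ascent point
            perturbations.append((p, e))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()          # gradient at perturbed weights
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)                        # restore original weights
    optimizer.step()
```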

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

1 code implementation · 1 Sep 2023 · Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-Yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein

We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
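One of the baseline defenses the paper evaluates, perplexity filtering, exploits exactly this weakness: adversarial suffixes found by discrete optimizers tend to be high-perplexity gibberish. A sketch assuming a Hugging Face-style causal LM; the threshold is illustrative and would be calibrated on benign prompts.

```python
import torch

def perplexity_filter(lm, tokenizer, prompt, threshold=1000.0):
    """Flag prompts with anomalously high perplexity under a causal LM
    (a Hugging Face-style model whose forward pass accepts labels).
    """
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss     # mean per-token NLL
    return torch.exp(loss).item() > threshold  # True => flag as attack
```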

Universal Pyramid Adversarial Training for Improved ViT Performance

no code implementations · 26 Dec 2023 · Ping-Yeh Chiang, Yipin Zhou, Omid Poursaeed, Satya Narayan Shukla, Ashish Shah, Tom Goldstein, Ser-Nam Lim

Recently, pyramid adversarial training (Herrmann et al., 2022) has been shown to be very effective for improving the clean accuracy and distribution-shift robustness of vision transformers.
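The "pyramid" refers to perturbations defined at several resolutions and composed into one image-sized perturbation; in the universal variant of the title, the perturbation is shared across the whole dataset. Below is a sketch of the composition step only, with illustrative (not the paper's) scale weights.

```python
import torch
import torch.nn.functional as F

def apply_pyramid(x, deltas, scale_weights=(20.0, 10.0, 1.0)):
    """Compose learnable perturbations defined at several resolutions
    into one image-sized perturbation. `deltas` is a list of
    (1, C, h, w) tensors, coarsest first, upsampled to the image size
    and summed with per-scale weights.
    """
    H, W = x.shape[-2:]
    out = x
    for d, s in zip(deltas, scale_weights):
        out = out + s * F.interpolate(d, size=(H, W), mode="nearest")
    return out.clamp(0, 1)
```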
