Search Results for author: Ping-Yeh Chiang

Found 16 papers, 4 papers with code

Active Learning at the ImageNet Scale

1 code implementation • 25 Nov 2021 Zeyad Ali Sami Emam, Hong-Min Chu, Ping-Yeh Chiang, Wojciech Czaja, Richard Leapman, Micah Goldblum, Tom Goldstein

Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNNs) can achieve better performance when trained on this labeled subset.

Active Learning

Protecting Proprietary Data: Poisoning for Secure Dataset Release

no code implementations • 29 Sep 2021 Liam H Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Amit Bansal, Wojciech Czaja, Tom Goldstein

These two behaviors can be in conflict, as an organization wants to prevent competitors from using its own data to replicate the performance of its proprietary models.

Data Poisoning

Adversarial Examples Make Strong Poisons

1 code implementation NeurIPS 2021 Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.

Data Poisoning

Certified Watermarks for Neural Networks

no code implementations • 1 Jan 2021 Arpit Amit Bansal, Ping-Yeh Chiang, Michael Curry, Hossein Souri, Rama Chellappa, John P Dickerson, Rajiv Jain, Tom Goldstein

Watermarking is a commonly used strategy to protect creators' rights to digital images, videos and audio.

WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic

no code implementations ICLR 2021 Renkun Ni, Hong-Min Chu, Oscar Castaneda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein

Low-precision neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.

Quantization

ProportionNet: Balancing Fairness and Revenue for Auction Design with Deep Learning

no code implementations • 13 Oct 2020 Kevin Kuo, Anthony Ostuni, Elizabeth Horishny, Michael J. Curry, Samuel Dooley, Ping-Yeh Chiang, Tom Goldstein, John P. Dickerson

Inspired by these advances, in this paper, we extend techniques for approximating auctions using deep learning to address concerns of fairness while maintaining high revenue and strong incentive guarantees.

Fairness

WrapNet: Neural Net Inference with Ultra-Low-Resolution Arithmetic

no code implementations • 26 Jul 2020 Renkun Ni, Hong-Min Chu, Oscar Castañeda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein

Low-resolution neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.

Quantization

Detection as Regression: Certified Object Detection by Median Smoothing

1 code implementation • 7 Jul 2020 Ping-Yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein

While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive.

Object Detection
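The core idea of the paper above — treating detection as regression and smoothing it with a median — can be illustrated with a minimal sketch. This is not the authors' code: the function name, the toy regressor, and all parameter values are assumptions for illustration. The smoothed prediction is the median of the base function's outputs over Gaussian perturbations of the input, which is robust to a bounded fraction of outlier responses.

```python
import numpy as np

def median_smooth(f, x, sigma=0.25, n=1000, rng=None):
    """Median-smoothed prediction: the median of f evaluated on
    Gaussian perturbations of x. Illustrative sketch of median
    smoothing, not the paper's implementation."""
    rng = np.random.default_rng(0) if rng is None else rng
    samples = np.array([f(x + sigma * rng.standard_normal(x.shape))
                        for _ in range(n)])
    return np.median(samples, axis=0)

# Toy "regressor": the sum of the input coordinates.
f = lambda x: float(np.sum(x))
x = np.ones(4)
y = median_smooth(f, x)  # close to f(x) = 4.0
```

Because the median, unlike the mean, ignores extreme sampled values, percentile bounds on it can be certified under adversarial input perturbations — the property the paper exploits for object detectors.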

Certifying Strategyproof Auction Networks

no code implementations NeurIPS 2020 Michael J. Curry, Ping-Yeh Chiang, Tom Goldstein, John Dickerson

We focus on the RegretNet architecture, which can represent auctions with arbitrary numbers of items and participants; it is trained to be empirically strategyproof, but the property is never exactly verified, leaving potential loopholes for market participants to exploit.

Certified Defenses for Adversarial Patches

1 code implementation ICLR 2020 Ping-Yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein

Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

no code implementations22 Feb 2020 Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.

WITCHcraft: Efficient PGD attacks with random step size

no code implementations18 Nov 2019 Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi

State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.
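The paper's titular idea — replacing a fixed PGD step size with one drawn at random each iteration — can be sketched in a few lines. This is an illustrative sketch on a toy differentiable loss, not the authors' code; the function name, the uniform step distribution, and all defaults are assumptions.

```python
import numpy as np

def pgd_random_step(grad_fn, x, eps=0.1, steps=20, rng=None):
    """Projected gradient descent in the L-inf ball of radius eps,
    with a step size drawn uniformly at random each iteration
    (the idea behind WITCHcraft). Sketch, not the paper's code."""
    rng = np.random.default_rng(0) if rng is None else rng
    delta = np.zeros_like(x)
    for _ in range(steps):
        step = rng.uniform(0, eps)          # random step size per iteration
        delta = delta + step * np.sign(grad_fn(x + delta))
        delta = np.clip(delta, -eps, eps)   # project back into the ball
    return x + delta

# Toy loss L(x) = ||x||^2 with gradient 2x: the attack ascends the loss
# and drives each coordinate to the boundary of the L-inf ball.
grad = lambda x: 2 * x
x_adv = pgd_random_step(grad, np.array([0.5, -0.5]))
```

Randomizing the step size varies the exploration of each attack trajectory, which the paper argues recovers much of the benefit of many random restarts at a fraction of the cost.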

Improved Training of Certifiably Robust Models

no code implementations25 Sep 2019 Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical (PGD) robustness.
