1 code implementation • NeurIPS 2021 • Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yiğitcan Kaya, Tudor Dumitraş
Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger.
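The mechanism these attacks exploit is that weights sitting near a rounding boundary can land in different bins after quantization, so a model that behaves normally in floating point can change behavior once quantized. A minimal sketch of uniform symmetric quantization illustrating this (the `quantize` helper and the example weights are illustrative, not from the paper):

```python
import numpy as np

def quantize(w, n_bits=8):
    # Uniform symmetric quantization: snap each weight to one of
    # 2**(n_bits-1) - 1 evenly spaced levels per sign.
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

# Two weight vectors that are nearly identical in floating point...
w_clean = np.array([0.49, -0.51, 1.27])
w_adv = w_clean + np.array([0.006, -0.006, 0.0])  # tiny perturbation

# ...land in different 8-bit bins, so the quantized models differ.
print(quantize(w_clean))  # bins: 0.49, -0.51, 1.27
print(quantize(w_adv))    # bins: 0.50, -0.52, 1.27
```

An attacker can therefore craft weights whose malicious behavior only activates after the victim quantizes the model.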
1 code implementation • ICLR 2021 • Sanghyun Hong, Yiğitcan Kaya, Ionuţ-Vlad Modoranu, Tudor Dumitraş
We show that a slowdown attack reduces the efficacy of multi-exit DNNs by 90-100%, and it amplifies the latency by 1.5-5$\times$ in a typical IoT deployment.
1 code implementation • 26 Feb 2020 • Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot
In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks.
1 code implementation • 17 Feb 2020 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş
This provides an incentive for adversaries to steal these novel architectures; when the architectures are used in the cloud to provide Machine Learning as a Service, adversaries also have an opportunity to reconstruct them by exploiting a range of hardware side channels.
no code implementations • 3 Jun 2019 • Sanghyun Hong, Pietro Frigo, Yiğitcan Kaya, Cristiano Giuffrida, Tudor Dumitraş
Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy.
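One way to see why graceful degradation can break down at the bit level: the effect of flipping a single bit in an IEEE-754 float32 parameter depends heavily on which bit it is. A minimal sketch (the `flip_bit` helper is illustrative, not the paper's code):

```python
import struct

def flip_bit(value, bit):
    # Flip one bit in the IEEE-754 binary32 encoding of `value`.
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

w = 0.5
# Flipping a low mantissa bit barely changes the weight...
small = flip_bit(w, 0)   # ~0.50000006
# ...but flipping the high exponent bit blows it up to ~1.7e38.
large = flip_bit(w, 30)
print(small, large)
```

Most bit positions yield the small, tolerable changes the "brain damage" literature describes, while a flip in a high exponent bit produces a drastic parameter change, which is the kind of corruption a hardware fault attack such as Rowhammer can induce.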
no code implementations • 19 Mar 2018 • Octavian Suciu, Radu Mărginean, Yiğitcan Kaya, Hal Daumé III, Tudor Dumitraş
Our model allows us to consider a wide range of weaker adversaries who have limited control and incomplete knowledge of the features, learning algorithms and training instances utilized.