Search Results for author: Yiğitcan Kaya

Found 6 papers, 4 papers with code

Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes

1 code implementation • NeurIPS 2021 • Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yiğitcan Kaya, Tudor Dumitraş

Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger.

Backdoor Attack • Federated Learning +1
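As a rough illustration of the quantization artifacts this paper exploits (a minimal NumPy sketch, not the authors' code; the layer size, input, and symmetric 8-bit scheme are assumptions), the snippet below shows how rounding weights to 8-bit values shifts a model's outputs, the gap an adversary can deliberately plant malicious behavior into:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: round to 255 levels, then dequantize."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8).astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8)).astype(np.float32)   # toy linear layer
x = rng.normal(size=8).astype(np.float32)          # toy input

logits_fp32 = w @ x                   # full-precision output
logits_int8 = quantize_int8(w) @ x    # output after weight quantization

# The rounding error is the "artifact": a model can be trained so that this
# small drift only flips predictions once the model is quantized.
print("max logit drift from quantization:", np.abs(logits_fp32 - logits_int8).max())
print("argmax changed:", logits_fp32.argmax() != logits_int8.argmax())
```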

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

1 code implementation • ICLR 2021 • Sanghyun Hong, Yiğitcan Kaya, Ionuţ-Vlad Modoranu, Tudor Dumitraş

We show that a slowdown attack reduces the efficacy of multi-exit DNNs by 90-100%, and it amplifies the latency by 1.5-5$\times$ in a typical IoT deployment.

Image Classification
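To make the slowdown mechanism concrete, here is a toy sketch of confidence-based early exiting (not the paper's implementation; the three-exit model, threshold, and logits are invented): a benign input stops at the first exit, while an input crafted to keep every exit uncertain forces the full network to run, which is the latency amplification a slowdown attack targets.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_exit_forward(exit_logits, threshold=0.9):
    """Evaluate exits in order; stop at the first sufficiently confident one.
    Returns (prediction, number_of_exits_evaluated)."""
    for depth, logits in enumerate(exit_logits, start=1):
        probs = softmax(logits)
        if probs.max() >= threshold:      # early exit on a confident prediction
            return probs.argmax(), depth
    return probs.argmax(), depth          # fell through to the final exit

# Toy logits for a 3-exit network on two inputs (illustrative values only).
benign = [np.array([6.0, 0.1, 0.2]), np.array([7.0, 0.1, 0.1]), np.array([8.0, 0.1, 0.1])]
attack = [np.array([1.0, 1.0, 1.1]), np.array([1.1, 1.0, 1.0]), np.array([1.0, 1.1, 1.0])]

print("benign input exits after", multi_exit_forward(benign)[1], "of 3 blocks")
print("slowdown input exits after", multi_exit_forward(attack)[1], "of 3 blocks")
```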

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

1 code implementation • 26 Feb 2020 • Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot

In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks.

Data Poisoning
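The gradient-shaping idea studied here can be sketched as per-example gradient clipping plus noise, a DP-SGD-style instantiation; the clip norm, noise scale, and gradients below are arbitrary placeholders, not values from the paper. The point is that shaping bounds how much any single, possibly poisoned, example can pull the update.

```python
import numpy as np

def shape_gradients(per_example_grads, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip each example's gradient to a maximum L2 norm, average, and add noise.
    This caps the influence of any single (possibly poisoned) training example."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(scale=noise_std, size=avg.shape)

# A poisoned example with a huge gradient no longer dominates the shaped update.
grads = [np.array([0.02, -0.01]), np.array([0.03, 0.00]), np.array([50.0, -40.0])]
print("shaped update:", shape_gradients(grads))
print("naive average:", np.mean(grads, axis=0))
```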

How to 0wn NAS in Your Spare Time

1 code implementation • 17 Feb 2020 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş

This provides an incentive for adversaries to steal these novel architectures; when the architectures are used in the cloud to provide Machine Learning as a Service, adversaries also have an opportunity to reconstruct them by exploiting a range of hardware side channels.

Malware Detection • Neural Architecture Search

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks

no code implementations • 3 Jun 2019 • Sanghyun Hong, Pietro Frigo, Yiğitcan Kaya, Cristiano Giuffrida, Tudor Dumitraş

Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy.

General Classification • Image Classification
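The kind of hardware fault this paper considers can be illustrated by flipping a single bit in the IEEE-754 representation of a float32 weight (a minimal sketch of the general effect, not the paper's attack or its victim models): flipping a high exponent bit turns a small parameter into an enormous one, which is why the degradation is anything but graceful.

```python
import numpy as np

def flip_bit(value, bit):
    """Flip one bit of a float32 value via its raw 32-bit representation."""
    raw = np.array([value], dtype=np.float32).view(np.uint32)
    raw ^= np.uint32(1 << bit)
    return raw.view(np.float32)[0]

w = np.float32(0.01)          # a typically small DNN weight
for bit in (0, 23, 30):       # a mantissa bit, the lowest exponent bit, a high exponent bit
    print(f"flip bit {bit:2d}: {w} -> {flip_bit(w, bit)}")
```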

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks

no code implementations • 19 Mar 2018 • Octavian Suciu, Radu Mărginean, Yiğitcan Kaya, Hal Daumé III, Tudor Dumitraş

Our model allows us to consider a wide range of weaker adversaries who have limited control over, and incomplete knowledge of, the features, learning algorithms, and training instances used.

BIG-bench Machine Learning
