Search Results for author: Azer Khan

Found 2 papers, 0 papers with code

Thwarting finite difference adversarial attacks with output randomization

no code implementations ICLR 2020 Haidar Khan, Daniel Park, Azer Khan, Bülent Yener

Adversarial examples pose a threat to deep neural network models in a variety of scenarios, from settings where the adversary has complete knowledge of the model to the opposite "black box" setting, where the adversary has no such knowledge.

Adversarial Attack
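
The entry above names output randomization only at a high level. As a rough illustration of the idea, the sketch below wraps a classifier so that the scores it returns are perturbed with zero-mean Gaussian noise, which degrades the finite-difference gradient estimates that query-based black-box attacks rely on. The wrapper class, its name, and the noise scale `sigma` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class OutputRandomizationWrapper(nn.Module):
    """Adds zero-mean Gaussian noise to a classifier's output probabilities
    at query time, corrupting finite-difference gradient estimates.
    (Illustrative sketch only; not the papers' exact defense.)"""

    def __init__(self, model: nn.Module, sigma: float = 0.05):
        super().__init__()
        self.model = model
        self.sigma = sigma  # illustrative noise scale, not taken from the papers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.softmax(self.model(x), dim=-1)
        noise = torch.randn_like(probs) * self.sigma
        # Returned scores are noisy; the predicted class is usually unchanged,
        # but repeated queries no longer yield consistent gradient estimates.
        return probs + noise
```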

Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models

no code implementations 8 Jul 2021 Daniel Park, Haidar Khan, Azer Khan, Alex Gittens, Bülent Yener

Adversarial examples pose a threat to deep neural network models in a variety of scenarios, from the "white box" setting, where the adversary has complete knowledge of the model, to the opposite "black box" setting, where the adversary has no such knowledge.
