
AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients

Deep neural networks have been shown to be vulnerable to adversarial examples: inputs crafted by adding imperceptible perturbations that cause the target model to misbehave. Existing attack methods for k-nearest neighbor (kNN) based algorithms either require large perturbations or are not applicable for large k. To address this problem, this paper proposes a new method called AdvKNN for evaluating the adversarial robustness of kNN-based models...
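
The truncated abstract does not spell out how the approximate gradients are obtained. As a general illustration only, and not necessarily the paper's AdvKNN procedure, the sketch below replaces the hard, non-differentiable kNN vote with a softmax-weighted surrogate over distances and then takes a single FGSM-style perturbation step using the surrogate's gradient. All function names (`soft_knn_logits`, `fgsm_attack_on_knn`) and parameters (`temperature`, `eps`) are hypothetical and chosen for this sketch.

```python
# Illustrative sketch: approximate gradients for a kNN classifier via a
# differentiable (soft) voting surrogate, followed by an FGSM-style step.
import torch
import torch.nn.functional as F


def soft_knn_logits(x, train_x, train_y, num_classes, temperature=1.0):
    """Differentiable surrogate for kNN voting.

    x:        (d,) query point
    train_x:  (n, d) reference points
    train_y:  (n,) integer labels
    Returns class scores (num_classes,) whose argmax approximates the kNN vote.
    """
    dists = torch.cdist(x.unsqueeze(0), train_x).squeeze(0)   # (n,) distances to references
    weights = F.softmax(-dists / temperature, dim=0)          # closer points get larger weight
    one_hot = F.one_hot(train_y, num_classes).float()         # (n, num_classes)
    return weights @ one_hot                                  # soft class votes


def fgsm_attack_on_knn(x, y, train_x, train_y, num_classes, eps=0.1):
    """One FGSM-style perturbation using the gradient of the soft-kNN surrogate."""
    x_adv = x.clone().requires_grad_(True)
    scores = soft_knn_logits(x_adv, train_x, train_y, num_classes)
    # Treat the soft votes as logits for the attack loss.
    loss = F.cross_entropy(scores.unsqueeze(0), y.unsqueeze(0))
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


# Toy usage: 2-class problem in 2-D.
torch.manual_seed(0)
train_x = torch.randn(50, 2)
train_y = (train_x[:, 0] > 0).long()
x = torch.tensor([0.2, -0.1])
y = torch.tensor(1)
x_adv = fgsm_attack_on_knn(x, y, train_x, train_y, num_classes=2, eps=0.2)
```

The softmax temperature controls how closely the surrogate matches the hard kNN decision: lower temperatures concentrate the vote on the nearest neighbors but yield sharper, less informative gradients.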
