Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples

28 Jul 2019 · Hossein Hosseini, Sreeram Kannan, Radha Poovendran

Deep learning classifiers are known to be vulnerable to adversarial examples. A recent paper presented at ICML 2019 proposed a detection method based on a statistical test, motivated by the observation that the logits of noise-perturbed adversarial examples are biased toward the true class...
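The observation underlying the statistical test can be illustrated with a minimal sketch: perturb an input with Gaussian noise, and measure how the logit gap between the predicted class and every other class shifts on average. All names below are hypothetical, and the random linear map stands in for a trained network; a real test would also calibrate the detection threshold on clean data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in classifier: a fixed random linear map from
# 784-dimensional inputs to 10 class logits (a real test would use a
# trained network).
W = rng.normal(size=(10, 784))

def logits(x):
    return W @ x

def noise_perturbed_logit_gaps(x, y, n_samples=64, sigma=0.1):
    """Average logit gap logit_z(x + eta) - logit_y(x + eta) over
    Gaussian noise eta, for every class z other than the predicted
    class y. Per the observation above, for an adversarial x the gap
    toward the true class tends to be unusually large under noise.
    """
    gaps = np.zeros(10)
    for _ in range(n_samples):
        eta = rng.normal(scale=sigma, size=x.shape)
        l = logits(x + eta)
        gaps += l - l[y]
    gaps /= n_samples
    gaps[y] = -np.inf  # exclude the predicted class itself
    return gaps

x = rng.normal(size=784)
y = int(np.argmax(logits(x)))
gaps = noise_perturbed_logit_gaps(x, y)
# Flag as adversarial if any off-class gap exceeds a threshold;
# 0.5 here is an arbitrary placeholder, not a calibrated value.
is_flagged = bool(gaps.max() > 0.5)
```

In the actual method, the threshold is calibrated per class pair from the noise statistics of clean examples, which is what makes it a statistical test rather than a fixed cutoff.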


