Search Results for author: Stephanie Olaiya

Found 1 paper, 0 papers with code

Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception

no code implementations • 5 Jun 2023 • Drew Linsley, Pinyuan Feng, Thibaut Boissin, Alekh Karkada Ashok, Thomas Fel, Stephanie Olaiya, Thomas Serre

Harmonized DNNs achieve the best of both worlds and experience attacks that are detectable and affect features that humans find diagnostic for recognition, meaning that attacks on these models are more likely to be rendered ineffective by inducing similar effects on human perception.

Adversarial Attack · Adversarial Robustness · +2
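
The paper lists no official code. As a rough illustration of the kind of adversarial attack discussed in the abstract, the sketch below applies a single FGSM (fast gradient sign method) perturbation step in PyTorch; the model, inputs, and epsilon are placeholder assumptions and this is not the harmonization approach described in the paper.

```python
# Minimal FGSM sketch (illustrative only; not the paper's method).
# The model, inputs, labels, and epsilon below are placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarially perturbed copy of x using one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()


if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()  # untrained stand-in classifier
    x = torch.rand(1, 3, 224, 224)                # random stand-in "image" batch
    y = torch.tensor([0])
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())                # perturbation bounded by epsilon
```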
