Search Results for author: Manvel Gasparyan

Found 2 papers, 0 papers with code

Exact Feature Collisions in Neural Networks

no code implementations · 31 May 2022 · Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem

Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; maliciously crafted data points containing such small perturbations are referred to as adversarial examples.
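To illustrate the idea of an adversarial example, here is a minimal sketch of the classic one-step Fast Gradient Sign Method on a toy logistic model. This is not the paper's method or setup; the weights, input, and perturbation budget are all illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step Fast Gradient Sign Method on a logistic model.

    Moves the input x by eps in the direction of the sign of the
    loss gradient, the canonical recipe for crafting a small
    perturbation that flips the model's prediction.
    """
    p = sigmoid(w @ x + b)           # model confidence for class 1
    grad_x = (p - y) * w             # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# Toy example (weights and input are illustrative, not from the paper)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])                 # w @ x + b = 0.8 > 0: class 1
x_adv = fgsm(x, y=1, w=w, b=b, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)          # True: original input is class 1
print(sigmoid(w @ x_adv + b) > 0.5)      # False: perturbed input flips to class 0
```

With an L-infinity budget of 0.5, the perturbed point differs from the original by at most 0.5 per coordinate yet receives the opposite label.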

Perturbation Analysis of Gradient-based Adversarial Attacks

no code implementations · 2 Jun 2020 · Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem

Our experiments reveal that the Iterative Fast Gradient Sign attack, although commonly regarded as a fast way to generate adversarial examples, is the worst-performing attack in terms of the number of iterations required when the perturbation budget is held equal across attacks.
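The attack discussed above can be sketched as follows: small sign steps are taken repeatedly, with each iterate clipped back into the L-infinity ball around the original input. This is a minimal sketch on a toy logistic model, not the paper's experimental setup; the model, step size, and budget are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ifgsm(x, y, w, b, eps, alpha, steps):
    """Iterative Fast Gradient Sign sketch on a logistic model.

    Repeats small gradient-sign steps of size alpha, clipping each
    iterate so the total perturbation stays within the L-infinity
    budget eps around the original input x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)            # current model confidence
        grad_x = (p - y) * w                  # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # enforce the budget
    return x_adv

# Toy example (all values illustrative)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])                      # classified as class 1
x_adv = ifgsm(x, y=1, w=w, b=b, eps=0.5, alpha=0.1, steps=10)
print(sigmoid(w @ x_adv + b) > 0.5)           # False: prediction flipped
```

Smaller values of alpha need more iterations to reach the boundary of the eps-ball, which is the kind of iteration-count trade-off the paper's comparison measures.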
