Search Results for author: Gabriela Ciocarlie

Found 3 papers, 2 papers with code

Adversarial Scratches: Deployable Attacks to CNN Classifiers

1 code implementation • 20 Apr 2022 • Loris Giulivi, Malhar Jere, Loris Rossi, Farinaz Koushanfar, Gabriela Ciocarlie, Briland Hitaj, Giacomo Boracchi

We present Adversarial Scratches: a novel L0 black-box attack, which takes the form of scratches in images, and which possesses much greater deployability than other state-of-the-art attacks.
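A minimal sketch of the general idea (not the authors' code): a scratch can be rendered as a curve over the image so that only the pixels on the curve change, keeping the L0 footprint small. The control points, color, and image size below are hypothetical placeholders.

```python
# Illustrative sketch only: render a "scratch" as a quadratic Bezier curve,
# overwriting just the pixels it touches (an L0-style perturbation).
import numpy as np

def quadratic_bezier(p0, p1, p2, n_points=200):
    """Sample points along a quadratic Bezier curve given three control points."""
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def apply_scratch(image, p0, p1, p2, color=(255, 0, 0)):
    """Overwrite the pixels touched by the curve with a fixed color."""
    scratched = image.copy()
    pts = np.rint(quadratic_bezier(np.array(p0), np.array(p1), np.array(p2))).astype(int)
    h, w = image.shape[:2]
    pts = pts[(pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)]
    scratched[pts[:, 1], pts[:, 0]] = color
    return scratched

image = np.zeros((224, 224, 3), dtype=np.uint8)              # placeholder image
adv = apply_scratch(image, (10, 10), (120, 200), (210, 40))  # hypothetical scratch
print("pixels modified:", int((adv != image).any(axis=-1).sum()))
```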

Scratch that! An Evolution-based Adversarial Attack against Neural Networks

1 code implementation • 5 Dec 2019 • Malhar Jere, Loris Rossi, Briland Hitaj, Gabriela Ciocarlie, Giacomo Boracchi, Farinaz Koushanfar

We study black-box adversarial attacks for image classifiers in a constrained threat model, where adversaries can only modify a small fraction of pixels in the form of scratches on an image.
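As a hedged sketch of what "evolution-based" and "black-box" mean here (not the paper's implementation): a simple (1+1) evolution strategy can search over scratch parameters using only query access to a classifier. The classifier below is a stand-in stub; in practice the fitness would be the model's confidence in the true class for the scratched image.

```python
# Toy (1+1) evolution strategy over scratch control-point coordinates,
# using only black-box fitness queries. All names and values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def classifier_confidence(params):
    """Stand-in black-box query: lower is better for the attacker."""
    target = np.array([30, 40, 110, 160, 200, 60], dtype=float)  # toy optimum
    return float(np.linalg.norm(params - target))

def evolve_scratch(n_iters=500, sigma=10.0, bounds=(0, 223)):
    parent = rng.uniform(bounds[0], bounds[1], size=6)   # 3 control points (x, y)
    parent_fit = classifier_confidence(parent)
    for _ in range(n_iters):
        child = np.clip(parent + rng.normal(0.0, sigma, size=6), *bounds)
        child_fit = classifier_confidence(child)
        if child_fit <= parent_fit:                      # keep the better candidate
            parent, parent_fit = child, child_fit
    return parent, parent_fit

best_params, best_fit = evolve_scratch()
print("best scratch parameters:", np.round(best_params, 1), "fitness:", round(best_fit, 3))
```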

Adversarial Attack • Image Captioning +1

ct-fuzz: Fuzzing for Timing Leaks

no code implementations • 15 Apr 2019 • Shaobo He, Michael Emmi, Gabriela Ciocarlie

Testing-based methodologies like fuzzing are able to analyze complex software which is not amenable to traditional formal approaches like verification, model checking, and abstract interpretation.
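A hedged sketch of the two-run idea behind fuzzing for timing leaks (not ct-fuzz's actual implementation): run the program on two inputs that agree on the public data but differ in the secret, and flag a potential leak if the observable cost differs. The instrumented "cost" here is a simple operation counter on a deliberately leaky comparison.

```python
# Toy timing-leak fuzzer: compare operation counts across two secrets
# for the same public input; a difference means cost depends on the secret.
import secrets

def leaky_compare(guess: bytes, secret: bytes):
    """Early-exit comparison: loop iterations depend on the secret."""
    ops = 0
    for g, s in zip(guess, secret):
        ops += 1
        if g != s:
            return False, ops
    return len(guess) == len(secret), ops

def fuzz_for_timing_leak(trials=1000, length=16):
    for _ in range(trials):
        public_guess = secrets.token_bytes(length)   # shared public input
        secret_a = secrets.token_bytes(length)       # two differing secrets
        secret_b = secrets.token_bytes(length)
        _, cost_a = leaky_compare(public_guess, secret_a)
        _, cost_b = leaky_compare(public_guess, secret_b)
        if cost_a != cost_b:                         # cost varies with the secret
            return True
    return False

print("timing leak detected:", fuzz_for_timing_leak())
```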

Software Engineering
