no code implementations • 18 Apr 2024 • Raz Lapid, Almog Dubin, Moshe Sipper
This paper presents RADAR (Robust Adversarial Detection via Adversarial Retraining), an approach designed to enhance the robustness of adversarial detectors against adaptive attacks while maintaining classifier performance.
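Below is a minimal PyTorch sketch of the general idea named in the abstract (retraining a detector on adaptive adversarial examples), not the paper's exact RADAR procedure; `classifier`, `detector`, and `loader` are hypothetical, and the detector is assumed to emit one logit where >0 means "adversarial".

```python
# Minimal sketch of adversarial retraining for an adversarial-example
# detector (illustrative; not the paper's exact RADAR procedure).
import torch
import torch.nn.functional as F

def adaptive_pgd(x, y, classifier, detector, eps=8/255, alpha=2/255, steps=10):
    """Craft perturbations that fool the classifier AND evade the detector."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        loss = F.cross_entropy(classifier(adv), y)   # push toward misclassification
        loss = loss - detector(adv).mean()           # push detector logit down
        loss.backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)

def retrain_detector(detector, classifier, loader, opt, epochs=5):
    for _ in range(epochs):
        for x, y in loader:
            x_adv = adaptive_pgd(x, y, classifier, detector)
            inputs = torch.cat([x, x_adv])
            labels = torch.cat([torch.zeros(len(x)),      # 0 = clean
                                torch.ones(len(x_adv))])  # 1 = adversarial
            logits = detector(inputs).squeeze(1)          # assumed shape (N, 1)
            loss = F.binary_cross_entropy_with_logits(logits, labels)
            opt.zero_grad(); loss.backward(); opt.step()
```

Regenerating the adaptive attack inside the training loop is what distinguishes this from plain adversarial training: the detector is always retrained against attacks that already account for its current state.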
no code implementations • 5 Mar 2024 • Ben Pinhasov, Raz Lapid, Rony Ohayon, Moshe Sipper, Yehudit Aperstein
Furthermore, this approach does not change the performance of the deepfake detector.
no code implementations • 4 Sep 2023 • Raz Lapid, Ron Langberg, Moshe Sipper
The GA attack works by optimizing a universal adversarial prompt that, when combined with a user's query, disrupts the attacked model's alignment, resulting in unintended and potentially harmful outputs.
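A minimal sketch of a genetic algorithm over a universal token suffix follows; the vocabulary size, hyperparameters, and the `fitness` objective are placeholders, not the paper's implementation.

```python
# Minimal GA sketch over a universal adversarial suffix (illustrative,
# not the paper's exact method).
import random

VOCAB = list(range(32000))   # hypothetical token-id vocabulary
SUFFIX_LEN, POP, GENS, MUT = 20, 64, 0.1, 0.1
SUFFIX_LEN, POP, GENS, MUT = 20, 64, 100, 0.1

def fitness(suffix):
    # Placeholder objective so the sketch runs end-to-end; in practice this
    # would append the suffix to a batch of user queries, query the target
    # LLM, and score how strongly the responses deviate from alignment.
    return -sum(abs(t - 1234) for t in suffix)

def mutate(s):
    return [random.choice(VOCAB) if random.random() < MUT else t for t in s]

def crossover(a, b):
    cut = random.randrange(1, SUFFIX_LEN)
    return a[:cut] + b[cut:]

pop = [[random.choice(VOCAB) for _ in range(SUFFIX_LEN)] for _ in range(POP)]
for gen in range(GENS):
    scored = sorted(pop, key=fitness, reverse=True)
    elite = scored[: POP // 4]                      # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
```

Because the fitness only needs the model's responses, the search is black-box: no gradients or internal weights of the attacked LLM are required.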
no code implementations • 13 Jun 2023 • Raz Lapid, Moshe Sipper
Through experiments conducted on the ViT-GPT2 model, which is the most-used image-to-text model on Hugging Face, and the Flickr30k dataset, we demonstrate that our proposed attack successfully generates visually similar adversarial examples, with both untargeted and targeted captions.
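For illustration, here is a PGD-style targeted-caption attack sketch; the checkpoint name is the public ViT-GPT2 captioner on Hugging Face, but the loss, budget, and update rule are assumptions rather than the paper's exact attack.

```python
# Minimal PGD-style sketch of a targeted attack on an image captioner
# (illustrative; not the paper's exact attack). `pixels` is assumed to be
# a preprocessed pixel_values tensor, e.g. shape (1, 3, 224, 224).
import torch
from transformers import AutoTokenizer, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained(
    "nlpconnect/vit-gpt2-image-captioning").eval()
tok = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
for p in model.parameters():
    p.requires_grad_(False)          # gradients w.r.t. the input only

def targeted_attack(pixels, target_caption, eps=8/255, alpha=1/255, steps=40):
    labels = tok(target_caption, return_tensors="pt").input_ids
    delta = torch.zeros_like(pixels, requires_grad=True)
    for _ in range(steps):
        out = model(pixel_values=pixels + delta, labels=labels)
        out.loss.backward()          # teacher-forced loss on the target caption
        # descend on the loss so the model prefers the target caption
        delta.data = (delta - alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (pixels + delta).detach()
```

An untargeted variant would instead ascend on the loss of the model's own caption, pushing the output away from it.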
no code implementations • 8 Jun 2023 • Moshe Sipper, Achiya Elyasaf, Tomer Halperin, Zvika Haramaty, Raz Lapid, Eyal Segal, Itai Tzruia, Snir Vitrack Tamam
We survey eight recent works by our group involving the successful blending of evolutionary algorithms with machine learning and deep learning.
no code implementations • 7 Mar 2023 • Raz Lapid, Eylon Mizrahi, Moshe Sipper
To our knowledge, this is the first and only method that performs black-box physical attacks directly on object-detection models, resulting in a model-agnostic attack.
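To make the threat model concrete, here is a query-only patch search using a simple (1+1) evolution strategy; `detector_score` is a placeholder, and the paper's method differs (e.g., it can search a generator's latent space to keep patches naturalistic).

```python
# Minimal sketch of a query-only (black-box) adversarial-patch search
# (illustrative; not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def detector_score(patch):
    # Placeholder: paste `patch` onto a batch of scene images, run the
    # object detector, and return mean detection confidence (lower = better).
    return float(np.mean(patch))     # dummy objective so the sketch runs

best = rng.random((3, 64, 64))       # initial random RGB patch in [0, 1]
best_score, sigma = detector_score(best), 0.1
for step in range(500):
    cand = np.clip(best + sigma * rng.standard_normal(best.shape), 0, 1)
    s = detector_score(cand)         # one black-box query per candidate
    if s < best_score:               # keep mutations that suppress detection
        best, best_score = cand, s
```

Since only detector outputs are queried, the same loop applies unchanged to any object detector, which is what makes the attack model-agnostic.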
1 code implementation • 27 Nov 2022 • Snir Vitrack Tamam, Raz Lapid, Moshe Sipper
Our novel algorithm, AttaXAI, a model-agnostic adversarial attack on XAI algorithms, requires only access to the output logits of a classifier and to the explanation map; these weak assumptions render our approach highly useful where real-world models and data are concerned.
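The sketch below illustrates the stated threat model with a simple gradient-free search (not the paper's implementation): it evolves a perturbation that drives the explanation map toward an arbitrary target while preserving the predicted label, touching only logits and maps. `model` and `explain` are placeholders for the black-box classifier and the XAI method.

```python
# AttaXAI-style attack sketch (illustrative): uses only logits and
# explanation maps, per the paper's two weak assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x_adv, y, target_map, model, explain):
    logits = model(x_adv)                         # assumption 1: logits only
    emap = explain(x_adv)                         # assumption 2: map only
    keep_label = float(np.argmax(logits) == y)    # prediction must not change
    map_dist = np.mean((emap - target_map) ** 2)  # pull map toward target
    return -map_dist - 10.0 * (1.0 - keep_label)

def attack(x, y, target_map, model, explain, sigma=0.02, steps=1000):
    best = x.copy()
    best_f = fitness(best, y, target_map, model, explain)
    for _ in range(steps):
        cand = np.clip(best + sigma * rng.standard_normal(x.shape), 0, 1)
        f = fitness(cand, y, target_map, model, explain)
        if f > best_f:
            best, best_f = cand, f
    return best
```

The result, if the search succeeds, is an input that the classifier labels as before while its explanation points somewhere entirely different.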
no code implementations • 17 Aug 2022 • Raz Lapid, Zvika Haramaty, Moshe Sipper
Deep neural networks (DNNs) are sensitive to adversarial data in a variety of scenarios, including the black-box scenario, where the attacker is only allowed to query the trained model and receive an output.
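For context, here is a simple score-based attack in that same query-only threat model; this is a SimBA-style coordinate search used purely as an illustrative baseline, not the paper's evolutionary QuEry Attack, and `predict_probs` is a placeholder for the queried model.

```python
# Score-based black-box attack sketch (illustrative baseline, not the
# paper's algorithm): greedily lower p(true class), one query at a time.
import numpy as np

def simba(x, y, predict_probs, eps=0.05, steps=2000, seed=0):
    """Greedy coordinate-wise search over pixels of x (values in [0, 1])."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    p_best = predict_probs(x_adv)[y]
    dims = rng.permutation(x.size)
    for d in dims[:steps]:
        for sign in (+1.0, -1.0):
            cand = x_adv.copy()
            cand.flat[d] = np.clip(cand.flat[d] + sign * eps, 0.0, 1.0)
            p = predict_probs(cand)[y]          # one model query
            if p < p_best:                      # keep steps that hurt the true class
                x_adv, p_best = cand, p
                break
    return x_adv
```

Attacks in this setting succeed or fail on query efficiency, which is exactly the axis the paper's evolutionary algorithm targets.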
no code implementations • 24 Jun 2022 • Raz Lapid, Moshe Sipper
Studying both standard fully connected neural networks (FCNs) and convolutional neural networks (CNNs), we propose a novel, three-population, coevolutionary algorithm to evolve activation functions (AFs), and compare it to four other methods, both evolutionary and non-evolutionary.
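The toy sketch below shows the shape of a three-population coevolutionary loop, heavily simplified: each population proposes an AF for one slot in the network, individuals are drawn from a fixed primitive set rather than evolved expressions, and the fitness function is a stub.

```python
# Toy three-population coevolution sketch for activation functions
# (illustrative; the paper's representation and evaluation are richer).
import random
import torch
import torch.nn.functional as F

PRIMITIVES = {                      # candidate AFs for this toy version
    "relu": F.relu, "tanh": torch.tanh, "sigmoid": torch.sigmoid,
    "elu": F.elu, "softsign": F.softsign,
}

def eval_network(af_a, af_b, af_c):
    # Placeholder: build a small network using the three AFs in its three
    # slots, train briefly, and return validation accuracy.
    return random.random()          # dummy fitness so the sketch runs

pops = [random.choices(list(PRIMITIVES), k=8) for _ in range(3)]
reps = [p[0] for p in pops]         # current representative per population
for gen in range(20):
    for i in range(3):
        def fit(ind):
            trio = list(reps)
            trio[i] = ind           # evaluate with the other populations' reps
            return eval_network(*[PRIMITIVES[a] for a in trio])
        pops[i].sort(key=fit, reverse=True)
        reps[i] = pops[i][0]        # best individual becomes representative
        # truncation selection plus random-immigrant mutation
        pops[i] = pops[i][:4] + [random.choice(list(PRIMITIVES)) for _ in range(4)]
```

The key coevolutionary ingredient is that an individual is never scored alone: its fitness depends on the current best representatives of the other two populations.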