no code implementations • 21 Nov 2024 • Eden Luzon, Guy Amit, Roy Weiss, Yisroel Mirsky
What makes this attack unique is that it (1) works even when the tasks conflict (making a classifier output images), (2) enables the systematic extraction of training samples from deployed models and (3) offers guarantees on the authenticity of the extracted data.
no code implementations • 20 Oct 2024 • Maor Biton Dor, Yisroel Mirsky
This paper introduces a novel data-free model extraction attack that significantly advances the current state-of-the-art in terms of efficiency, accuracy, and effectiveness.
no code implementations • 20 Oct 2024 • Daniel Ayzenshteyn, Roy Weiss, Yisroel Mirsky
As large language models (LLMs) continue to evolve, their potential use in automating cyberattacks becomes increasingly likely.
no code implementations • 20 Oct 2024 • Bar Avraham, Yisroel Mirsky
We then evaluate the transferability of adversarial perturbations on these images using a set of substitute models.
no code implementations • 12 Oct 2024 • Gilad Gressel, Rahul Pankajakshan, Yisroel Mirsky
Our evaluation of 9 leading models from the LMSYS leaderboard revealed that explicit challenges successfully detected LLMs in 78.4% of cases, while implicit challenges were effective in 22.9% of instances.
1 code implementation • 21 Jul 2024 • Fred Grabovski, Lior Yasur, Guy Amit, Yisroel Mirsky
Recent progress in generative models has made it easier for a wide audience to edit and create image content, raising concerns about the proliferation of deepfakes, especially in healthcare.
1 code implementation • AAAI Conference on Artificial Intelligence 2024 • Seffi Cohen, Ofir Arbili, Yisroel Mirsky, Lior Rokach
Without the presence of any attacks, TTTS has successfully improved model performance from an AUC of 0.714 to 0.773.
no code implementations • 14 Mar 2024 • Roey Bokobza, Yisroel Mirsky
Our paper presents a novel defence against black box attacks, where attackers use the victim model as an oracle to craft their adversarial examples.
1 code implementation • 14 Mar 2024 • Roy Weiss, Daniel Ayzenshteyn, Guy Amit, Yisroel Mirsky
In this paper, we unveil a novel side-channel that can be used to read encrypted responses from AI Assistants over the web: the token-length side-channel.
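The core observation behind a token-length side-channel can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes, for illustration, that an assistant streams one token per encrypted record, that the stream cipher adds no padding, and that each record carries a fixed protocol overhead (the `OVERHEAD` constant below is a made-up value).

```python
# Hypothetical sketch of the token-length side-channel idea: if an AI
# assistant streams its reply one token per encrypted record and the
# cipher adds no padding, each payload length equals the token length
# plus a fixed overhead. An eavesdropper who sees only payload sizes
# can therefore recover the length of every token in the response.

OVERHEAD = 5  # assumed fixed per-record overhead in bytes (illustrative)

def token_lengths(payload_sizes):
    """Recover the token-length sequence from observed payload sizes."""
    return [size - OVERHEAD for size in payload_sizes]

# Example: the tokens "Sure", ",", " I", " can" have lengths 4, 1, 2, 4;
# the attacker observes only the encrypted payload sizes.
observed = [9, 6, 7, 9]
print(token_lengths(observed))  # -> [4, 1, 2, 4]
```

The recovered length sequence is what a downstream model would then use to infer the plaintext.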
1 code implementation • 13 Nov 2023 • Guy Amit, Mosh Levy, Yisroel Mirsky
In this work, we show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets.
no code implementations • 4 Jun 2023 • Guy Frankovits, Yisroel Mirsky
Generative deep learning models are able to create realistic audio and video.
no code implementations • 8 Jan 2023 • Lior Yasur, Guy Frankovits, Fred M. Grabovski, Yisroel Mirsky
In this work we focus on real-time audio deepfakes and present preliminary results on video.
2 code implementations • 23 Aug 2022 • Mosh Levy, Guy Amit, Yuval Elovici, Yisroel Mirsky
By leveraging a set of diverse surrogate models, our method can predict transferability of adversarial examples.
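The underlying idea, that the fraction of surrogate models an adversarial example fools serves as a transferability estimate, can be sketched as follows. This is a toy illustration, not the paper's method: real surrogates would be trained neural networks, whereas here tiny hand-written linear classifiers stand in for them.

```python
# Minimal, hypothetical sketch: score an adversarial example's expected
# transferability by the fraction of diverse surrogate models it fools.

def predict(weights, x):
    """Linear surrogate: the sign of the dot product decides the class."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score >= 0 else 0

def transferability_score(surrogates, x_adv, true_label):
    """Fraction of surrogate models misclassifying the adversarial input."""
    fooled = sum(1 for w in surrogates if predict(w, x_adv) != true_label)
    return fooled / len(surrogates)

# Three illustrative surrogates and a perturbed input whose true label is 1.
surrogates = [(1.0, -0.5), (0.8, 0.2), (-0.3, 1.0)]
x_adv = (-1.0, -0.2)
print(transferability_score(surrogates, x_adv, 1))  # -> 0.666... (2 of 3 fooled)
```

A higher score suggests the perturbation exploits features shared across models, and so is more likely to transfer to the unseen victim.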
no code implementations • 17 Aug 2022 • Yisroel Mirsky
In this paper, we propose a lightweight application which can protect organizations and individuals from deepfake SE attacks.
no code implementations • 21 Jan 2022 • Moshe Levy, Guy Amit, Yuval Elovici, Yisroel Mirsky
Deep learning has shown great promise in the domain of medical image analysis.
no code implementations • 30 Jun 2021 • Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, Battista Biggio
Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations.
no code implementations • 30 Apr 2021 • Yisroel Mirsky
In this paper, we introduce a new type of adversarial patch which alters a model's perception of an image's semantics.
1 code implementation • 18 Jun 2020 • Yisroel Mirsky, Tomer Golomb, Yuval Elovici
Due to its rapid growth and deployment, the Internet of Things (IoT) has become a central aspect of our daily lives.
no code implementations • 23 Apr 2020 • Yisroel Mirsky, Wenke Lee
Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake.
no code implementations • 5 Mar 2020 • Dvir Cohen, Yisroel Mirsky, Yuval Elovici, Rami Puzis, Manuel Kamp, Tobias Martin, Asaf Shabtai
In this paper, we present DANTE: a framework and algorithm for mining darknet traffic.
1 code implementation • 18 Oct 2019 • Yisroel Mirsky, Benjamin Fedidat, Yoram Haddad
In this paper, we present the Vernam Physical Signal Cipher (VPSC): a novel cipher which can encrypt the harmonic composition of any analog waveform.
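The Vernam-cipher-on-frequencies idea can be sketched in a few lines. This toy operates on a list of frequency magnitudes rather than a live analog waveform, and the modulus `M` and keystream are illustrative stand-ins, not the cipher's actual parameters.

```python
# Hedged sketch of the VPSC concept: apply a Vernam (one-time-pad)
# cipher to a signal's frequency magnitudes. Encryption adds a random
# keystream value to each magnitude modulo M; decryption subtracts it.
import random

M = 100.0  # assumed maximum representable magnitude (illustrative)

def encrypt(magnitudes, keystream):
    """One-time-pad each frequency magnitude modulo M."""
    return [(m + k) % M for m, k in zip(magnitudes, keystream)]

def decrypt(cipher, keystream):
    """Subtract the same keystream to restore the magnitudes."""
    return [(c - k) % M for c, k in zip(cipher, keystream)]

rng = random.Random(0)
mags = [3.5, 12.0, 0.7, 45.2]
key = [rng.uniform(0, M) for _ in mags]
restored = decrypt(encrypt(mags, key), key)
print([round(m, 6) for m in restored])  # -> [3.5, 12.0, 0.7, 45.2]
```

Because the keystream is uniform over [0, M), each encrypted magnitude is itself uniform, which is what gives the one-time-pad its secrecy.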
Cryptography and Security
no code implementations • 13 Mar 2019 • Eran Fainman, Bracha Shapira, Lior Rokach, Yisroel Mirsky
In online learning, the challenge is to find the optimum set of features to be acquired from each instance upon arrival from a data stream.
1 code implementation • 11 Jan 2019 • Yisroel Mirsky, Tom Mahler, Ilan Shelef, Yuval Elovici
In this paper, we show how an attacker can use deep-learning to add or remove evidence of medical conditions from volumetric (3D) medical scans.
2 code implementations • 9 May 2018 • Yair Meidan, Michael Bohadana, Yael Mathov, Yisroel Mirsky, Dominik Breitenbacher, Asaf Shabtai, Yuval Elovici
The proliferation of IoT devices, which can be more easily compromised than desktop computers, has led to an increase in the occurrence of IoT-based botnet attacks.
no code implementations • 10 Mar 2018 • Tomer Golomb, Yisroel Mirsky, Yuval Elovici
However, an anomaly detection model must be trained for a long time in order to capture all benign behaviors.
3 code implementations • 25 Feb 2018 • Yisroel Mirsky, Tomer Doitshman, Yuval Elovici, Asaf Shabtai
In this paper, we present Kitsune: a plug-and-play NIDS which can learn to detect attacks on the local network, without supervision, and in an efficient online manner.
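The unsupervised, online flavor of this approach can be illustrated with a minimal sketch. Kitsune itself uses an ensemble of autoencoders (KitNET) over traffic features; this toy replaces that with a streaming z-score detector purely to show the learn-as-you-go idea, and the threshold value is an assumption.

```python
# Toy online anomaly detector: maintain a running mean/variance with
# Welford's algorithm and flag observations whose z-score exceeds a
# threshold. Illustrative only; not Kitsune's actual KitNET ensemble.
import math

class OnlineAnomalyDetector:
    """Streaming mean/variance with a z-score anomaly threshold."""

    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.threshold = threshold

    def update(self, x):
        """Process one observation; return True if it looks anomalous."""
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        else:
            anomalous = False  # not enough history yet
        # Welford's online update of the running mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

det = OnlineAnomalyDetector()
stream = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 50.0]
flags = [det.update(x) for x in stream]
print(flags)  # -> only the final outlier (50.0) is flagged
```

Like the real system, the detector needs no labels: it models whatever it sees as benign and flags deviations, processing each packet-derived value exactly once.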