Search Results for author: Asaf Shabtai

Found 65 papers, 8 papers with code

DOMBA: Double Model Balancing for Access-Controlled Language Models via Minimum-Bounded Aggregation

no code implementations20 Aug 2024 Tom Segal, Asaf Shabtai, Yuval Elovici

A straightforward approach for preventing such exposure is to train a separate model for each access level.
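The "separate model per access level" baseline mentioned above can be sketched as a simple routing layer: each model is trained only on documents at its level, and a query is answered by the model matching the user's clearance, so text seen only by higher levels can never be emitted. The level names and canned responders below are hypothetical stand-ins for real per-level fine-tuned LLMs, not the paper's method.

```python
ACCESS_LEVELS = ["public", "internal", "secret"]   # lowest to highest

# Hypothetical per-level "models": in practice, one LLM fine-tuned per level
# on only the documents that level is cleared to see.
models = {level: (lambda q, lvl=level: f"[{lvl}-level model] {q}")
          for level in ACCESS_LEVELS}

def answer(query, user_level):
    if user_level not in ACCESS_LEVELS:
        raise ValueError(f"unknown access level: {user_level}")
    # Route to the single model trained on data the user may see.
    return models[user_level](query)
```

The cost of this baseline, which motivates aggregation approaches like DOMBA, is that each model sees only a fraction of the training data.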

Detection of Compromised Functions in a Serverless Cloud Environment

no code implementations5 Aug 2024 Danielle Lavi, Oleg Brodt, Dudu Mimran, Yuval Elovici, Asaf Shabtai

To evaluate our model's performance, we developed a serverless cybersecurity testbed in an AWS cloud environment, which includes two different serverless applications and simulates a variety of attack scenarios that cover the main security threats faced by serverless functions.

LLMCloudHunter: Harnessing LLMs for Automated Extraction of Detection Rules from Cloud-Based CTI

no code implementations6 Jul 2024 Yuval Schwartz, Lavi Benshimol, Dudu Mimran, Yuval Elovici, Asaf Shabtai

As the number and sophistication of cyber attacks have increased, threat hunting has become a critical aspect of active security, enabling proactive detection and mitigation of threats before they cause significant harm.

RAPID: Robust APT Detection and Investigation Using Context-Aware Deep Learning

no code implementations8 Jun 2024 Yonatan Amaru, Prasanna Wudali, Yuval Elovici, Asaf Shabtai

Advanced persistent threats (APTs) pose significant challenges for organizations, leading to data breaches, financial losses, and reputational damage.

Anomaly Detection Computational Efficiency

GenKubeSec: LLM-Based Kubernetes Misconfiguration Detection, Localization, Reasoning, and Remediation

no code implementations30 May 2024 Ehud Malul, Yair Meidan, Dudu Mimran, Yuval Elovici, Asaf Shabtai

In this paper, we propose GenKubeSec, a comprehensive and adaptive LLM-based method which, in addition to detecting a wide variety of KCF misconfigurations, also identifies the exact location of each misconfiguration and provides detailed reasoning about it, along with suggested remediation.

CodeCloak: A Method for Evaluating and Mitigating Code Leakage by LLM Code Assistants

no code implementations13 Apr 2024 Amit Finkman, Eden Bar-Kochva, Avishag Shapira, Dudu Mimran, Yuval Elovici, Asaf Shabtai

While beneficial, these tools might inadvertently expose the developer's proprietary code to the code assistant service provider during the development process.

Prompted Contextual Vectors for Spear-Phishing Detection

1 code implementation13 Feb 2024 Daniel Nahmias, Gal Engelberg, Dan Klein, Asaf Shabtai

Spear-phishing attacks present a significant security challenge, with large language models (LLMs) escalating the threat by generating convincing emails and facilitating target reconnaissance.

Document Classification

DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers

no code implementations4 Feb 2024 Oryan Yehezkel, Alon Zolfi, Amit Baras, Yuval Elovici, Asaf Shabtai

In this paper, we present DeSparsify, an attack targeting the availability of vision transformers that use token sparsification mechanisms.

Adversarial Attack Image Classification +2

GPT in Sheep's Clothing: The Risk of Customized GPTs

no code implementations17 Jan 2024 Sagiv Antebi, Noam Azulay, Edan Habler, Ben Ganon, Asaf Shabtai, Yuval Elovici

In November 2023, OpenAI introduced a new service allowing users to create custom versions of ChatGPT (GPTs) by using specific instructions and knowledge to guide the model's behavior.

QuantAttack: Exploiting Dynamic Quantization to Attack Vision Transformers

no code implementations3 Dec 2023 Amit Baras, Alon Zolfi, Yuval Elovici, Asaf Shabtai

However, their dynamic behavior and average-case performance assumption make them vulnerable to a novel threat vector -- adversarial attacks that target the model's efficiency and availability.

Quantization

Detecting Anomalous Network Communication Patterns Using Graph Convolutional Networks

no code implementations30 Nov 2023 Yizhak Vaisman, Gilad Katz, Yuval Elovici, Asaf Shabtai

To protect an organization's endpoints from sophisticated cyberattacks, advanced detection methods are required.

X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail

no code implementations14 Jun 2023 Omer Hofman, Amit Giloni, Yarin Hayun, Ikuya Morikawa, Toshiya Shimizu, Yuval Elovici, Asaf Shabtai

X-Detect was evaluated in both the physical and digital space using five different attack scenarios (including adaptive attacks) and the COCO dataset and our new Superstore dataset.

Decision Making Object +2

ReMark: Receptive Field based Spatial WaterMark Embedding Optimization using Deep Network

no code implementations11 May 2023 Natan Semyonov, Rami Puzis, Asaf Shabtai, Gilad Katz

Watermarking is one of the most important copyright protection tools for digital media.

CADeSH: Collaborative Anomaly Detection for Smart Homes

no code implementations2 Mar 2023 Yair Meidan, Dan Avraham, Hanan Libhaber, Asaf Shabtai

To overcome this, we propose a two-step collaborative anomaly detection method which first uses an autoencoder to differentiate frequent (`benign') and infrequent (possibly `malicious') traffic flows.

Anomaly Detection Intrusion Detection +1
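The first step described above can be sketched as reconstruction-error filtering: learn to reconstruct frequent ("benign") traffic flows, then flag flows that reconstruct poorly as infrequent and possibly malicious. A linear PCA autoencoder stands in here for the paper's autoencoder, and the flow vectors are synthetic.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    # PCA stand-in for an autoencoder: mean plus a k-dimensional code basis.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def reconstruction_error(X, mu, W):
    Z = (X - mu) @ W           # encode
    Xr = Z @ W.T + mu          # decode
    return np.linalg.norm(X - Xr, axis=1)

rng = np.random.default_rng(0)
mix = rng.normal(0, 1, (5, 5))
frequent = rng.normal(0, 1, (500, 3)) @ mix[:3]   # flows near a 3-dim subspace
rare = rng.normal(0, 10, (5, 5))                  # infrequent, off-subspace flows

mu, W = fit_linear_autoencoder(frequent, k=3)
threshold = np.quantile(reconstruction_error(frequent, mu, W), 0.99)
flagged = reconstruction_error(rare, mu, W) > threshold
```

Only the flagged flows would then proceed to the second, collaborative step.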

YolOOD: Utilizing Object Detection Concepts for Multi-Label Out-of-Distribution Detection

no code implementations CVPR 2024 Alon Zolfi, Guy Amit, Amit Baras, Satoru Koda, Ikuya Morikawa, Yuval Elovici, Asaf Shabtai

In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task.

Classification Multi-class Classification +6

Latent SHAP: Toward Practical Human-Interpretable Explanations

no code implementations27 Nov 2022 Ron Bitton, Alon Malach, Amiel Meiseles, Satoru Momiyama, Toshinori Araki, Jun Furukawa, Yuval Elovici, Asaf Shabtai

Model-agnostic feature attribution algorithms (such as SHAP and LIME) are ubiquitous techniques for explaining the decisions of complex classification models, such as deep neural networks.

Classification

Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models

no code implementations24 Nov 2022 Jacob Shams, Ben Nassi, Ikuya Morikawa, Toshiya Shimizu, Asaf Shabtai, Yuval Elovici

In this paper, we present an adaptive framework to watermark a protected model, leveraging the unique behavior present in the model due to a unique random seed initialized during the model training.

Model extraction

Improving Interpretability via Regularization of Neural Activation Sensitivity

no code implementations16 Nov 2022 Ofir Moshe, Gil Fidel, Ron Bitton, Asaf Shabtai

We evaluate the interpretability of models trained using our method to that of standard models and models trained using state-of-the-art adversarial robustness techniques.

Adversarial Robustness Explanation Fidelity Evaluation +1

Attacking Object Detector Using A Universal Targeted Label-Switch Patch

no code implementations16 Nov 2022 Avishag Shapira, Ron Bitton, Dan Avraham, Alon Zolfi, Yuval Elovici, Asaf Shabtai

However, no prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object.

Object

A Transferable and Automatic Tuning of Deep Reinforcement Learning for Cost Effective Phishing Detection

no code implementations19 Sep 2022 Orel Lavie, Asaf Shabtai, Gilad Katz

Many challenging real-world problems require the deployment of ensembles of multiple complementary learning models to reach acceptable performance levels.

Reinforcement Learning (RL)

Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors

1 code implementation26 May 2022 Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai

Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years.

Autonomous Driving Object +2

Adversarial Machine Learning Threat Analysis and Remediation in Open Radio Access Network (O-RAN)

no code implementations16 Jan 2022 Edan Habler, Ron Bitton, Dan Avraham, Dudu Mimran, Eitan Klevansky, Oleg Brodt, Heiko Lehmann, Yuval Elovici, Asaf Shabtai

Next, we explore the various AML threats associated with O-RAN and review a large number of attacks that can be performed to realize these threats and demonstrate an AML attack on a traffic steering model.

Anomaly Detection BIG-bench Machine Learning

Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Model

1 code implementation21 Nov 2021 Alon Zolfi, Shai Avidan, Yuval Elovici, Asaf Shabtai

In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets.

Face Recognition Real-World Adversarial Attack

Dodging Attack Using Carefully Crafted Natural Makeup

no code implementations14 Sep 2021 Nitzan Guetta, Asaf Shabtai, Inderjeet Singh, Satoru Momiyama, Yuval Elovici

Deep learning face recognition models are used by state-of-the-art surveillance systems to identify individuals passing through public areas (e.g., airports).

Face Recognition

Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems

no code implementations5 Jul 2021 Ron Bitton, Nadav Maman, Inderjeet Singh, Satoru Momiyama, Yuval Elovici, Asaf Shabtai

Using the extension, security practitioners can apply attack graph analysis methods in environments that include ML components; thus, providing security practitioners with a methodological and practical tool for evaluating the impact and quantifying the risk of a cyberattack targeting an ML production system.

BIG-bench Machine Learning Graph Generation

RadArnomaly: Protecting Radar Systems from Data Manipulation Attacks

no code implementations13 Jun 2021 Shai Cohen, Efrat Levy, Avi Shaked, Tair Cohen, Yuval Elovici, Asaf Shabtai

The proposed technique, which allows the detection of malicious manipulation of critical fields in the data stream, is complemented by a timing-interval anomaly detection mechanism proposed for the detection of message dropping attempts.

Anomaly Detection
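A timing-interval check of the kind described above can be sketched as follows: learn the inter-message interval distribution from clean traffic, then alert on any gap large enough to suggest a dropped message. The 10 Hz stream, jitter level, and 3-sigma rule are illustrative choices, not the paper's actual parameters.

```python
import random
from statistics import mean, stdev

def intervals(timestamps):
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def learn_profile(timestamps):
    gaps = intervals(timestamps)
    return mean(gaps), stdev(gaps)

def dropped_message_alerts(timestamps, mu, sigma, k=3.0):
    # Flag every gap wider than mu + k*sigma as a possible dropped message.
    return [i for i, g in enumerate(intervals(timestamps)) if g > mu + k * sigma]

random.seed(1)
t, clean = 0.0, [0.0]
for _ in range(200):                 # ~10 Hz stream with small timing jitter
    t += 0.1 + random.gauss(0, 0.005)
    clean.append(t)
mu, sigma = learn_profile(clean)

# One message (the one after t=0.5) has been dropped, leaving a 0.2 s gap.
stream = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.8]
alerts = dropped_message_alerts(stream, mu, sigma)
```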

TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack

no code implementations10 Mar 2021 Yam Sharon, David Berend, Yang Liu, Asaf Shabtai, Yuval Elovici

Prior research on bypassing NIDSs has mainly focused on perturbing the features extracted from the attack traffic to fool the detection system; however, this may jeopardize the attack's functionality.

Network Intrusion Detection

The Translucent Patch: A Physical and Universal Attack on Object Detectors

no code implementations CVPR 2021 Alon Zolfi, Moshe Kravchik, Yuval Elovici, Asaf Shabtai

Therefore, in our experiments, which are conducted on state-of-the-art object detection models used in autonomous driving, we study the effect of the patch on the detection of both the selected target class and the other classes.

Autonomous Driving Object +2

BENN: Bias Estimation Using Deep Neural Network

no code implementations23 Dec 2020 Amit Giloni, Edita Grolman, Tanja Hagemann, Ronald Fromm, Sebastian Fischer, Yuval Elovici, Asaf Shabtai

The need to detect bias in machine learning (ML) models has led to the development of multiple bias detection methods, yet utilizing them is challenging since each method: i) explores a different ethical aspect of bias, which may result in contradictory output among the different methods, ii) provides an output of a different range/scale and therefore cannot be compared with other methods, and iii) requires different input, and therefore a human expert needs to be involved to adjust each method according to the examined model.

Bias Detection

Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems

no code implementations23 Dec 2020 Moshe Kravchik, Battista Biggio, Asaf Shabtai

With this research, we are the first to demonstrate such poisoning attacks on online NN-based cyber attack detectors for ICS.

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers

no code implementations30 Oct 2020 Tzvika Shapira, David Berend, Ishai Rosenberg, Yang Liu, Asaf Shabtai, Yuval Elovici

The performance of a machine learning-based malware classifier depends on the large and updated training set used to induce its model.

Malware Detection

Approximating Aggregated SQL Queries With LSTM Networks

no code implementations25 Oct 2020 Nir Regev, Lior Rokach, Asaf Shabtai

We use an LSTM network to learn the relationship between queries and their results, and to provide a rapid inference layer for predicting query results.
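The idea of answering an aggregate query from a learned model instead of scanning the table can be sketched with a much simpler stand-in for the paper's LSTM: a least-squares regressor over the parameters of a range-COUNT query, trained on (query, exact answer) pairs. The data and query parameterization are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
table = rng.uniform(0, 100, 10_000)          # the column being aggregated

def true_count(lo, hi):
    # Exact answer: SELECT COUNT(*) WHERE lo <= col < hi (full scan).
    return float(((table >= lo) & (table < hi)).sum())

# Training set: range predicates paired with their exact answers.
los = rng.uniform(0, 50, 500)
his = los + rng.uniform(1, 50, 500)
X = np.column_stack([np.ones_like(los), los, his])
y = np.array([true_count(l, h) for l, h in zip(los, his)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)    # fit count ~ a + b*lo + c*hi

def approx_count(lo, hi):
    # Rapid inference layer: no table scan, just a dot product.
    return float(np.array([1.0, lo, hi]) @ w)
```

At query time, `approx_count` answers in constant time at the price of a small approximation error.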

Dynamic Adversarial Patch for Evading Object Detection Models

no code implementations25 Oct 2020 Shahar Hoory, Tzvika Shapira, Asaf Shabtai, Yuval Elovici

In order to demonstrate our attack in a real-world setup, we implemented the patches by attaching flat screens to the target object; the screens are used to present the patches and switch between them, depending on the current camera location.

Object object-detection +2

Stop Bugging Me! Evading Modern-Day Wiretapping Using Adversarial Perturbations

no code implementations24 Oct 2020 Yael Mathov, Tal Ben Senior, Asaf Shabtai, Yuval Elovici

Our results in the real world suggest that our approach is a feasible solution for privacy protection.

Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders

1 code implementation19 Oct 2020 Elior Nehemya, Yael Mathov, Asaf Shabtai, Yuval Elovici

In this study, we present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques to manipulate the input data stream in real time.

Algorithmic Trading BIG-bench Machine Learning +2

Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples

no code implementations7 Oct 2020 Yael Mathov, Eden Levy, Ziv Katzir, Asaf Shabtai, Yuval Elovici

We, however, argue that machine learning models trained on heterogeneous tabular data are as susceptible to adversarial manipulations as those trained on continuous or homogeneous data such as images.

BIG-bench Machine Learning

Adversarial robustness via stochastic regularization of neural activation sensitivity

no code implementations23 Sep 2020 Gil Fidel, Ron Bitton, Ziv Katzir, Asaf Shabtai

Recent works have shown that the input domain of any machine learning classifier is bound to contain adversarial examples.

Adversarial Robustness

FOOD: Fast Out-Of-Distribution Detector

1 code implementation16 Aug 2020 Guy Amit, Moshe Levy, Ishai Rosenberg, Asaf Shabtai, Yuval Elovici

Deep neural networks (DNNs) perform well at classifying inputs associated with the classes they have been trained on, which are known as in-distribution inputs.

Out-of-Distribution Detection Out of Distribution (OOD) Detection

An Automated, End-to-End Framework for Modeling Attacks From Vulnerability Descriptions

no code implementations10 Aug 2020 Hodaya Binyamini, Ron Bitton, Masaki Inokuchi, Tomohiko Yagyu, Yuval Elovici, Asaf Shabtai

Given a description of a security vulnerability, the proposed framework first extracts the relevant attack entities required to model the attack, completes missing information on the vulnerability, and derives a new interaction rule that models the attack; this new rule is integrated within the MulVAL attack graph tool.

MORTON: Detection of Malicious Routines in Large-Scale DNS Traffic

no code implementations5 Aug 2020 Yael Daihes, Hen Tzaban, Asaf Nadler, Asaf Shabtai

In this paper, we present MORTON, a method that identifies compromised devices in enterprise networks based on the existence of routine DNS communication between devices and disreputable host names.

Cryptography and Security

Hierarchical Deep Reinforcement Learning Approach for Multi-Objective Scheduling With Varying Queue Sizes

no code implementations17 Jul 2020 Yoni Birman, Ziv Ido, Gilad Katz, Asaf Shabtai

In this study we present MERLIN, a robust, modular and near-optimal DRL-based approach for multi-objective task scheduling.

Position reinforcement-learning +2

Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain

no code implementations5 Jul 2020 Ihai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach

In recent years machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security.

Adversarial Attack BIG-bench Machine Learning

Autosploit: A Fully Automated Framework for Evaluating the Exploitability of Security Vulnerabilities

no code implementations30 Jun 2020 Noam Moscovich, Ron Bitton, Yakov Mallah, Masaki Inokuchi, Tomohiko Yagyu, Meir Kalech, Yuval Elovici, Asaf Shabtai

The results show that Autosploit is able to automatically identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments.

Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

no code implementations7 Feb 2020 Moshe Kravchik, Asaf Shabtai

This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than in other problem domains, such as malware detection and image processing.

Cyber Attack Detection Data Poisoning +1

GIM: Gaussian Isolation Machines

no code implementations6 Feb 2020 Guy Amit, Ishai Rosenberg, Moshe Levy, Ron Bitton, Asaf Shabtai, Yuval Elovici

In many cases, neural network classifiers are likely to be exposed to input data that lies outside their training distribution.

Benchmarking General Classification +1

When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures

no code implementations8 Sep 2019 Gil Fidel, Ron Bitton, Asaf Shabtai

We evaluate our method by building an extensive dataset of adversarial examples over the popular CIFAR-10 and MNIST datasets, and training a neural network-based detector to distinguish between normal and adversarial inputs.

Efficient Cyber Attacks Detection in Industrial Control Systems Using Lightweight Neural Networks and PCA

no code implementations2 Jul 2019 Moshe Kravchik, Asaf Shabtai

Finally, we study the proposed method's robustness against adversarial attacks, that exploit inherent blind spots of neural networks to evade detection while achieving their intended physical effect.

feature selection

Privacy-Preserving Detection of IoT Devices Connected Behind a NAT in a Smart Home Setup

no code implementations31 May 2019 Yair Meidan, Vinay Sachidananda, Yuval Elovici, Asaf Shabtai

Today, telecommunication service providers (telcos) are exposed to cyber-attacks executed by compromised IoT devices connected to their customers' networks.

Privacy Preserving

Transferable Cost-Aware Security Policy Implementation for Malware Detection Using Deep Reinforcement Learning

no code implementations25 May 2019 Yoni Birman, Shaked Hindi, Gilad Katz, Asaf Shabtai

This security policy is then implemented, and for each inspected file, a different set of detectors is assigned and a different detection threshold is set.

Malware Detection reinforcement-learning +2
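The per-file policy described above, which assigns a detector subset and a detection threshold, can be illustrated by a hand-written cascade that queries detectors from cheapest to most expensive and stops as soon as one is confident. This stands in for the learned (DRL) policy; the detectors, costs, and cutoffs below are hypothetical.

```python
def cascade_scan(detectors, file, benign_cutoff=0.2, malicious_cutoff=0.8):
    # detectors: (name, cost, score_fn) tuples, ordered cheapest-first;
    # score_fn returns a maliciousness score in [0, 1].
    spent, score = 0.0, 0.5
    for name, cost, score_fn in detectors:
        spent += cost
        score = score_fn(file)
        if score >= malicious_cutoff:
            return "malicious", spent
        if score <= benign_cutoff:
            return "benign", spent
    # No detector was confident: fall back to the last score.
    return ("malicious" if score >= 0.5 else "benign"), spent

detectors = [
    ("signature", 1.0, lambda f: 0.9 if "evil" in f else 0.5),
    ("deep_scan", 10.0, lambda f: 0.1),
]
verdict, cost = cascade_scan(detectors, "evil.exe")
```

Files the cheap detector is sure about never pay for the expensive one, which is the cost/accuracy trade-off the learned policy optimizes.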

MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses

no code implementations24 Feb 2019 Lior Sidi, Asaf Nadler, Asaf Shabtai

Domain generation algorithms (DGAs) are commonly used by botnets to generate domain names through which bots can establish a resilient communication channel with their command and control servers.

Cryptography and Security

Defense Methods Against Adversarial Examples for Recurrent Neural Networks

no code implementations28 Jan 2019 Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach

Using our methods, we were able to decrease the effectiveness of such an attack from 99.9% to 15%.

Cryptography and Security

MDGAN: Boosting Anomaly Detection Using Multi-Discriminator Generative Adversarial Networks

no code implementations11 Oct 2018 Yotam Intrator, Gilad Katz, Asaf Shabtai

Anomaly detection is often considered a challenging field of machine learning due to the difficulty of obtaining anomalous samples for training and the need to obtain a sufficient amount of training data.

Anomaly Detection

Detecting Cyberattacks in Industrial Control Systems Using Convolutional Neural Networks

no code implementations21 Jun 2018 Moshe Kravchik, Asaf Shabtai

This paper presents a study on detecting cyberattacks on industrial control systems (ICS) using unsupervised deep neural networks, specifically, convolutional neural networks.

Anomaly Detection

N-BaIoT: Network-based Detection of IoT Botnet Attacks Using Deep Autoencoders

2 code implementations9 May 2018 Yair Meidan, Michael Bohadana, Yael Mathov, Yisroel Mirsky, Dominik Breitenbacher, Asaf Shabtai, Yuval Elovici

The proliferation of IoT devices, which can be more easily compromised than desktop computers, has led to an increase in the occurrence of IoT-based botnet attacks.

Anomaly Detection

Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers

no code implementations23 Apr 2018 Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach

In this paper, we present a generic, query-efficient black-box attack against API call-based machine learning malware classifiers.

Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection

3 code implementations25 Feb 2018 Yisroel Mirsky, Tomer Doitshman, Yuval Elovici, Asaf Shabtai

In this paper, we present Kitsune: a plug and play NIDS which can learn to detect attacks on the local network, without supervision, and in an efficient online manner.

Network Intrusion Detection

Detection of Unauthorized IoT Devices Using Machine Learning Techniques

no code implementations14 Sep 2017 Yair Meidan, Michael Bohadana, Asaf Shabtai, Martin Ochoa, Nils Ole Tippenhauer, Juan Davis Guarnizo, Yuval Elovici

Based on the classification of 20 consecutive sessions and the use of majority rule, IoT device types that are not on the white list were correctly detected as unknown in 96% of test cases (on average), and white listed device types were correctly classified by their actual types in 99% of cases.

BIG-bench Machine Learning General Classification
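The decision rule described above can be sketched directly: take the majority label over consecutive per-session classifier outputs (20 sessions in the text) and report any majority label not on the white list as "unknown". The device-type labels here are hypothetical.

```python
from collections import Counter

WHITE_LIST = {"camera", "thermostat", "smart_tv"}

def classify_device(session_labels, white_list=WHITE_LIST):
    # Majority vote over noisy per-session classifier outputs.
    majority, _ = Counter(session_labels).most_common(1)[0]
    return majority if majority in white_list else "unknown"

noisy_sessions = ["camera"] * 14 + ["smart_tv"] * 6   # per-session predictions
```

Aggregating over sessions lets a mediocre per-session classifier reach the high per-device accuracy the snippet reports.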

Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

no code implementations19 Jul 2017 Ishai Rosenberg, Asaf Shabtai, Lior Rokach, Yuval Elovici

In this paper, we present a black-box attack against API call based machine learning malware classifiers, focusing on generating adversarial sequences combining API calls and static features (e. g., printable strings) that will be misclassified by the classifier without affecting the malware functionality.

BIG-bench Machine Learning

SIPHON: Towards Scalable High-Interaction Physical Honeypots

no code implementations10 Jan 2017 Juan Guarnizo, Amit Tambe, Suman Sankar Bhunia, Martín Ochoa, Nils Tippenhauer, Asaf Shabtai, Yuval Elovici

Based on this setup, six physical IP cameras, one NVR, and one IP printer are presented as 85 real IoT devices on the Internet, attracting 700 MB of daily traffic over a period of two months.

Cryptography and Security

Classification of Smartphone Users Using Internet Traffic

no code implementations1 Jan 2017 Andrey Finkelstein, Ron Biton, Rami Puzis, Asaf Shabtai

Today, smartphone devices are owned by a large portion of the population and have become a very popular platform for accessing the Internet.

Classification General Classification

Anomaly Detection Using the Knowledge-based Temporal Abstraction Method

no code implementations14 Dec 2016 Asaf Shabtai

According to the proposed method, a temporal pattern mining process is applied to a database of basic temporal abstractions in order to extract patterns representing normal behavior.

Anomaly Detection
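The pipeline described above can be sketched as follows: mine frequent patterns from sequences of temporal abstractions of normal behavior, then score a new sequence by how many of its patterns were never seen as frequent. Length-2 patterns (bigrams) stand in for the paper's temporal patterns, and the abstraction symbols are hypothetical.

```python
from collections import Counter

def mine_patterns(sequences, n=2, min_support=0.5):
    # Keep every length-n pattern appearing in at least min_support of sequences.
    counts = Counter()
    for seq in sequences:
        for gram in {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}:
            counts[gram] += 1
    return {g for g, c in counts.items() if c / len(sequences) >= min_support}

def anomaly_score(seq, patterns, n=2):
    # Fraction of the sequence's patterns that were not mined as normal.
    grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    return sum(g not in patterns for g in grams) / max(len(grams), 1)

normal = [["low", "high", "low", "high"], ["low", "high", "low"]] * 5
patterns = mine_patterns(normal)
```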
