Search Results for author: Hyoungshick Kim

Found 14 papers, 7 papers with code

Expectations Versus Reality: Evaluating Intrusion Detection Systems in Practice

no code implementations26 Mar 2024 Jake Hesford, Daniel Cheng, Alan Wan, Larry Huynh, Seungho Kim, Hyoungshick Kim, Jin B. Hong

Our paper provides empirical comparisons between recent IDSs, offering an objective basis for users to choose the most appropriate solution for their requirements.

Intrusion Detection

Single-Class Target-Specific Attack against Interpretable Deep Learning Systems

1 code implementation12 Jul 2023 Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed

The universal perturbation is stochastically and iteratively optimized by minimizing the adversarial loss that is designed to consider both the classifier and interpreter costs in targeted and non-targeted categories.

Adversarial Attack
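The abstract describes a universal perturbation that is stochastically and iteratively optimized against a joint classifier/interpreter loss. A minimal sketch of that idea (assumed, not the authors' code: the toy linear classifier, the quadratic stand-in for the interpreter cost, and all hyperparameters are illustrative) looks like:

```python
import numpy as np

# Minimal sketch (assumed): a single universal perturbation `delta` is refined
# over randomly sampled inputs by descending a joint loss that mixes a
# classifier-side cost (push predictions toward one target class) and an
# interpreter-side cost (keep the perturbed input close to the benign one).

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))           # toy linear classifier weights (4 features, 3 classes)
target = 2                            # the single target class

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classifier_cost(x):
    # cross-entropy toward the target class
    return -np.log(softmax(W.T @ x)[target] + 1e-12)

def interpreter_cost(x, x0):
    # stand-in for the interpreter term: penalize drift from the benign input
    return 0.01 * np.linalg.norm(x - x0) ** 2

def numeric_grad(f, x, eps=1e-4):
    # central-difference gradient, to keep the sketch dependency-free
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

X = rng.normal(size=(8, 4))           # batch of benign inputs
delta = np.zeros(4)                   # universal perturbation, shared by all inputs
for _ in range(200):                  # stochastic, iterative optimization
    x0 = X[rng.integers(len(X))]
    loss = lambda d: classifier_cost(x0 + d) + interpreter_cost(x0 + d, x0)
    delta -= 0.1 * numeric_grad(loss, delta)
    delta = np.clip(delta, -1.0, 1.0) # keep the perturbation bounded
```

After optimization, `delta` lowers the target-class cross-entropy across the whole batch, which is what makes it "universal" rather than per-input.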

DeepTaster: Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in Deep Neural Networks

no code implementations24 Nov 2022 Seonhye Park, Alsharif Abuadbba, Shuo Wang, Kristen Moore, Yansong Gao, Hyoungshick Kim, Surya Nepal

In this study, we introduce DeepTaster, a novel DNN fingerprinting technique, to address scenarios where a victim's data is unlawfully used to build a suspect model.

Data Augmentation, Transfer Learning

Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things

1 code implementation3 Mar 2021 Yansong Gao, Minki Kim, Chandra Thapa, Sharif Abuadbba, Zhi Zhang, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal

Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques to enable machine learning training without accessing raw data on clients or end devices.

BIG-bench Machine Learning, Federated Learning
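The core idea of federated learning mentioned in the abstract — training without collecting raw client data — can be illustrated with a toy FedAvg loop (a sketch under assumptions, not the paper's implementation; the linear model, client count, and learning rates are illustrative):

```python
import numpy as np

# Minimal FedAvg sketch (assumed): each client fits a local linear model on
# its private data, and the server averages the resulting client weights
# instead of ever seeing the raw data.

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])        # ground-truth weights for the toy task

def make_client_data(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]
global_w = np.zeros(2)

for _ in range(20):                   # communication rounds
    local_ws = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(10):           # local gradient steps on private data
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)            # only weights leave the client
    global_w = np.mean(local_ws, axis=0)   # FedAvg aggregation at the server
```

Split learning differs in that the model itself, not the training loop, is partitioned between client and server.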

Peeler: Profiling Kernel-Level Events to Detect Ransomware

no code implementations29 Jan 2021 Muhammad Ejaz Ahmed, Hyoungshick Kim, Seyit Camtepe, Surya Nepal

Based on those characteristics, we develop Peeler that continuously monitors a target system's kernel events and detects ransomware attacks on the system.

Malware Detection, Cryptography and Security
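The abstract's idea of continuously monitoring kernel events for ransomware-like behavior can be illustrated with a toy sliding-window detector (assumed, far simpler than Peeler itself: the event types, window size, and threshold are all hypothetical):

```python
from collections import deque

# Toy sketch (assumed, not Peeler's actual model): ransomware tends to emit
# dense bursts of file write/rename/delete kernel events; a sliding time
# window over the event stream flags any process whose burst rate crosses
# a threshold.

WINDOW_SEC = 2.0
THRESHOLD = 50          # suspicious events per window (assumed value)
SUSPICIOUS = {"write", "rename", "delete"}

def detect(events):
    """events: iterable of (timestamp, pid, event_type), time-ordered per pid."""
    windows = {}        # pid -> deque of recent suspicious-event timestamps
    alerts = set()
    for ts, pid, kind in events:
        if kind not in SUSPICIOUS:
            continue
        q = windows.setdefault(pid, deque())
        q.append(ts)
        while q and ts - q[0] > WINDOW_SEC:
            q.popleft()               # drop events outside the window
        if len(q) >= THRESHOLD:
            alerts.add(pid)           # sustained burst -> raise an alert
    return alerts
```

A benign process writing a few files per second never fills the window, while an encryptor touching hundreds of files in a burst trips the threshold immediately.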

DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN

no code implementations12 Jan 2021 Alsharif Abuadbba, Hyoungshick Kim, Surya Nepal

In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks.

Autonomous Vehicles
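The fragile-watermark concept in the abstract — a self-contained mark that breaks if the model is tampered with — can be sketched in miniature (assumed, much simpler than DeepiSign: here a keyed digest is hidden in the least-significant mantissa bits of the weights, which is an illustrative stand-in for the paper's embedding):

```python
import hashlib
import numpy as np

# Toy fragile-watermark sketch (assumed): hide a keyed digest of the weights
# inside the least-significant mantissa bits of the weights themselves, so
# any later modification of the model invalidates the embedded digest.

def embed(weights, key):
    w = weights.astype(np.float32).copy()
    bits = w.view(np.uint32) & ~np.uint32(1)          # clear every LSB
    digest = hashlib.sha256(bits.tobytes() + key).digest()
    payload = np.unpackbits(np.frombuffer(digest, np.uint8))
    n = min(len(payload), bits.size)
    flat = bits.reshape(-1)
    flat[:n] |= payload[:n].astype(np.uint32)         # write digest bits
    return flat.reshape(w.shape).view(np.float32)

def verify(weights, key):
    bits = weights.astype(np.float32).view(np.uint32)
    cleared = bits & ~np.uint32(1)                    # re-derive the digest
    digest = hashlib.sha256(cleared.tobytes() + key).digest()
    payload = np.unpackbits(np.frombuffer(digest, np.uint8))
    n = min(len(payload), bits.size)
    return bool(np.all((bits.reshape(-1)[:n] & 1) == payload[:n]))
```

Because only the lowest mantissa bit changes, the watermark is effectively invisible to model accuracy, yet flipping any covered weight breaks verification.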

Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks

no code implementations8 Oct 2020 Bedeuro Kim, Alsharif Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad Ejaz Ahmed, Hyoungshick Kim, Surya Nepal

To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds.

Steganalysis

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

1 code implementation21 Jul 2020 Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim

We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, the research on defense is far behind the attack, and there is no single defense that can prevent all types of backdoor attacks.

DeepCapture: Image Spam Detection Using Deep Learning and Data Augmentation

1 code implementation16 Jun 2020 Bedeuro Kim, Sharif Abuadbba, Hyoungshick Kim

To show the feasibility of DeepCapture, we evaluate its performance with publicly available datasets consisting of 6,000 spam and 2,313 non-spam image samples.

Data Augmentation, Spam detection

Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks

no code implementations22 Apr 2020 William Aiken, Hyoungshick Kim, Simon Woo

Creating a state-of-the-art deep-learning system requires vast amounts of data, expertise, and hardware, yet research into embedding copyright protection for neural networks has been limited.

End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things

1 code implementation30 Mar 2020 Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal

For learning performance, which is specified by the model accuracy and convergence speed metrics, we empirically evaluate both FL and SplitNN under different types of data distributions such as imbalanced and non-independent and identically distributed (non-IID) data.

Federated Learning
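The non-IID data distributions this evaluation covers are commonly simulated by partitioning labels across clients with a Dirichlet prior. A sketch of that standard technique (assumed — the paper does not specify this exact partitioner; client count, class count, and alpha are illustrative):

```python
import numpy as np

# Sketch (assumed): split a labeled dataset across clients so each client
# gets a skewed label mix, simulating non-IID data; smaller alpha makes the
# per-client class proportions more extreme.

rng = np.random.default_rng(42)
num_clients, num_classes, alpha = 4, 10, 0.5
labels = rng.integers(num_classes, size=2000)        # toy label array

client_indices = [[] for _ in range(num_clients)]
for c in range(num_classes):
    idx = np.where(labels == c)[0]
    rng.shuffle(idx)
    # draw this class's share for each client from Dirichlet(alpha)
    shares = rng.dirichlet([alpha] * num_clients)
    splits = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
    for client, part in enumerate(np.split(idx, splits)):
        client_indices[client].extend(part.tolist())
```

With alpha near 0 each client ends up dominated by a few classes; large alpha recovers a near-IID split, which is how the degree of heterogeneity is controlled in such evaluations.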

Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?

1 code implementation16 Mar 2020 Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A. Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal

We observed that the 1D CNN model under split learning can achieve the same accuracy of 98.9% as the original (non-split) model.

Privacy Preserving
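The split-learning setup this paper studies — cutting a model so the client sends only intermediate activations, never raw data — can be sketched with a tiny two-layer network (assumed, not the paper's 1D CNN; layer sizes, learning rate, and the toy task are illustrative):

```python
import numpy as np

# Minimal split-learning sketch (assumed): the model is cut after the first
# layer; the client sends only intermediate activations ("smashed data") to
# the server, and the server returns the gradient at the cut layer, so raw
# inputs never leave the client.

rng = np.random.default_rng(7)
X = rng.normal(size=(64, 8))                 # client-side private inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0 # toy regression target

W1 = rng.normal(scale=0.3, size=(8, 16))     # client-side layer
W2 = rng.normal(scale=0.3, size=(16, 1))     # server-side layer
lr = 0.05

for _ in range(300):
    # --- client: forward to the cut layer, transmit activations only ---
    h = np.maximum(X @ W1, 0.0)              # ReLU activations at the cut
    # --- server: finish forward, compute loss, backprop to the cut ---
    pred = h @ W2
    grad_pred = 2 * (pred - y) / len(y)      # d(MSE)/d(pred)
    grad_W2 = h.T @ grad_pred
    grad_h = grad_pred @ W2.T                # gradient sent back to client
    W2 -= lr * grad_W2
    # --- client: continue backprop locally on its own layer ---
    grad_W1 = X.T @ (grad_h * (h > 0))
    W1 -= lr * grad_W1

mse = float(np.mean((np.maximum(X @ W1, 0.0) @ W2 - y) ** 2))
```

The only traffic is `h` upstream and `grad_h` downstream, which is why split learning is attractive when clients cannot share raw signals such as medical 1D sensor data.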
