1 code implementation • 12 Aug 2024 • Hyunmin Choi, Jiwon Kim, Chiyoung Song, Simon S. Woo, Hyoungshick Kim
We present Blind-Match, a novel biometric identification system that leverages homomorphic encryption (HE) for efficient and privacy-preserving 1:N matching.
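The core operation being protected is 1:N matching: comparing one probe feature vector against every enrolled template. Below is a minimal plaintext sketch of that matching step (cosine similarity over unit-normalized vectors); in Blind-Match the dot products would be computed under homomorphic encryption, which is omitted here, and all names and the threshold are illustrative.

```python
import numpy as np

def normalize(v):
    # Unit-normalize so that a dot product equals cosine similarity.
    return v / np.linalg.norm(v)

def match_1_to_n(probe, gallery, threshold=0.8):
    """Return the index of the best-matching enrolled template, or -1.

    In an HE-based design, the dot products below would be evaluated
    over encrypted vectors; this sketch works on plaintext features.
    """
    probe = normalize(probe)
    scores = np.array([normalize(g) @ probe for g in gallery])
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else -1

rng = np.random.default_rng(0)
gallery = [rng.standard_normal(128) for _ in range(5)]   # enrolled templates
probe = gallery[3] + 0.05 * rng.standard_normal(128)     # noisy copy of ID 3
best = match_1_to_n(probe, gallery)
```

A genuine probe matches its enrolled identity with cosine similarity near 1, while unrelated high-dimensional random vectors score near 0, which is what makes a single threshold workable.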
1 code implementation • WWW: Proceedings of the ACM on Web Conference 2024 • Kiho Lee, Chaejin Lim, Beomjin Jin, TaeYoung Kim, Hyoungshick Kim
Conventional ad blocking and tracking prevention tools often fall short in addressing web content manipulation.
no code implementations • 26 Mar 2024 • Jake Hesford, Daniel Cheng, Alan Wan, Larry Huynh, Seungho Kim, Hyoungshick Kim, Jin B. Hong
Our paper provides empirical comparisons between recent IDSs, offering an objective basis for users to choose the most appropriate solution for their requirements.
1 code implementation • 12 Jul 2023 • Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed
The universal perturbation is stochastically and iteratively optimized by minimizing an adversarial loss designed to account for both the classifier and interpreter costs, in both targeted and non-targeted settings.
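To make the optimization pattern concrete, here is a hedged toy sketch of a stochastically, iteratively optimized *universal* perturbation (one delta applied to every input) against a linear classifier. It implements only the targeted classifier-cost term; the paper's full loss also includes an interpreter cost, which is omitted, and all values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "classifier": predict sign(x @ w + b).
w = rng.standard_normal(10)
b = 0.0
X = rng.standard_normal((200, 10))

def targeted_universal_perturbation(X, w, b, target=-1.0,
                                    eps=2.0, lr=0.2, steps=300):
    """Optimize ONE perturbation delta, shared by all inputs, that drives
    predictions toward `target`, constrained to an L2 ball of radius eps."""
    delta = np.zeros_like(w)
    for _ in range(steps):
        idx = rng.integers(0, len(X), size=32)      # stochastic mini-batch
        logits = (X[idx] + delta) @ w + b
        not_fooled = (target * logits < 1.0)        # hinge-style mask
        grad = -target * not_fooled.mean() * w      # d(mean hinge)/d(delta)
        delta -= lr * grad                          # gradient descent step
        norm = np.linalg.norm(delta)
        if norm > eps:                              # project back onto ball
            delta *= eps / norm
    return delta

delta = targeted_universal_perturbation(X, w, b)
asr = np.mean(np.sign((X + delta) @ w + b) == -1.0)  # attack success rate
```

Because the same delta must work for every input, it converges toward the single most damaging direction (here, along -w) rather than a per-sample perturbation.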
no code implementations • 24 Nov 2022 • Seonhye Park, Alsharif Abuadbba, Shuo Wang, Kristen Moore, Yansong Gao, Hyoungshick Kim, Surya Nepal
In this study, we introduce DeepTaster, a novel DNN fingerprinting technique, to address scenarios where a victim's data is unlawfully used to build a suspect model.
no code implementations • 21 Jan 2022 • Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Surya Nepal, Derek Abbott
The average attack success rate (ASR) remains high, at 78%, in the transfer learning attack scenarios evaluated on CenterNet.
1 code implementation • 3 Mar 2021 • Yansong Gao, Minki Kim, Chandra Thapa, Sharif Abuadbba, Zhi Zhang, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal
Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques to enable machine learning training without accessing raw data on clients or end devices.
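The defining difference is what crosses the network: FL exchanges model updates, while SL exchanges intermediate ("smashed") activations at a cut layer. The following is a minimal sketch of split learning's data flow with a two-layer linear model in NumPy; the layer sizes, learning rate, and data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data held ONLY by the client.
X = rng.standard_normal((64, 8))
true_w = rng.standard_normal(8)
y = X @ true_w

W_client = rng.standard_normal((8, 4)) * 0.1   # client-side layer
W_server = rng.standard_normal(4) * 0.1        # server-side layer
lr = 0.05

def loss():
    return float(np.mean(((X @ W_client) @ W_server - y) ** 2))

loss0 = loss()
for _ in range(300):
    h = X @ W_client                        # client forward; send activations
    pred = h @ W_server                     # server forward
    err = pred - y                          # server-side loss gradient
    g_server = h.T @ err / len(X)
    g_h = np.outer(err, W_server) / len(X)  # gradient returned to client
    W_server -= lr * g_server
    W_client -= lr * (X.T @ g_h)            # client backprops its own layer
loss_final = loss()
```

Note that only `h` and `g_h` cross the client/server boundary; the raw inputs `X` and labels never leave the client in this simplified setup (in practice, label handling depends on the SL variant).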
no code implementations • 29 Jan 2021 • Muhammad Ejaz Ahmed, Hyoungshick Kim, Seyit Camtepe, Surya Nepal
Based on those characteristics, we develop Peeler that continuously monitors a target system's kernel events and detects ransomware attacks on the system.
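As a loose illustration of this monitoring style (not Peeler's actual detection model), one building block is a sliding-window counter over suspicious kernel events: ransomware tends to produce dense bursts of file activity that benign workloads rarely match. The window length and threshold below are made-up values.

```python
from collections import deque

def make_burst_detector(window_s=5.0, threshold=50):
    """Toy sketch: flag activity when suspicious file events (e.g. rapid
    overwrite-after-read patterns) exceed a threshold within a sliding
    time window. Event semantics and thresholds are illustrative only."""
    times = deque()

    def on_event(t):
        times.append(t)
        while times and t - times[0] > window_s:
            times.popleft()                 # drop events outside the window
        return len(times) >= threshold      # True => raise an alert
    return on_event

benign = make_burst_detector()
ransom = make_burst_detector()
benign_alert = any(benign(i * 6.0) for i in range(10))    # sparse events
ransom_alert = any(ransom(i * 0.01) for i in range(60))   # dense burst
```

A real system would of course key such windows per process and combine many event types, as the paper's characteristic-based approach does.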
Malware Detection
Cryptography and Security
no code implementations • 12 Jan 2021 • Alsharif Abuadbba, Hyoungshick Kim, Surya Nepal
In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks.
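The goal can be illustrated with a trivial external checksum: any manipulation of the weights changes the digest. DeepiSign itself is self-contained, embedding the integrity data invisibly inside the model rather than storing it alongside, so the sketch below only demonstrates the property being enforced, not the paper's mechanism.

```python
import hashlib

import numpy as np

def fingerprint(weights):
    # Hash every parameter tensor; any weight manipulation changes the digest.
    h = hashlib.sha256()
    for wt in weights:
        h.update(np.ascontiguousarray(wt).tobytes())
    return h.hexdigest()

weights = [np.ones((2, 2)), np.arange(3.0)]  # stand-in CNN parameters
ref = fingerprint(weights)                   # recorded at release time
weights[0][0, 0] += 1e-7                     # a tiny trojan-style manipulation
tampered = fingerprint(weights)
```

Even a perturbation of 1e-7 in a single weight flips the digest, which is exactly why attackers who inject backdoors by retraining or weight editing cannot evade an integrity check they cannot recompute.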
no code implementations • 8 Oct 2020 • Bedeuro Kim, Alsharif Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad Ejaz Ahmed, Hyoungshick Kim, Surya Nepal
To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds.
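One generic consistency check against image-scaling (camouflage) attacks — a hedged sketch, not necessarily Decamouflage's exact implementation — is to rescale the image down and back up and measure how much it changed: benign images survive the round trip, while attack images that hide a payload at the scaler's sampling positions do not. The images and factor below are fabricated for illustration.

```python
import numpy as np

def down_nn(img, k):
    # Nearest-neighbor downscale by integer factor k (keep every k-th pixel).
    return img[::k, ::k]

def up_nn(small, k):
    # Nearest-neighbor upscale by integer factor k.
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)

def scaling_score(img, k=4):
    """Down-then-up-scale the image and measure the mean squared change.
    Benign images change little; scaling-camouflage images change a lot."""
    return float(np.mean((up_nn(down_nn(img, k), k) - img) ** 2))

# Benign: a smooth 64x64 gradient image.
xs = np.linspace(0.0, 1.0, 64)
benign = np.add.outer(xs, xs) / 2

# Attack: overwrite exactly the pixels the downscaler samples with a hidden
# payload, so the downscaled image becomes the payload while the full-size
# image still looks like the benign gradient.
rng = np.random.default_rng(0)
attack = benign.copy()
attack[::4, ::4] = rng.random((16, 16))
```

Thresholding such a score is cheap — consistent with the paper's finding that detection runs in milliseconds on a commodity i5 PC.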
1 code implementation • 21 Jul 2020 • Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting the intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, research on defenses lags far behind research on attacks, and no single defense can prevent all types of backdoor attacks.
1 code implementation • 16 Jun 2020 • Bedeuro Kim, Sharif Abuadbba, Hyoungshick Kim
To show the feasibility of DeepCapture, we evaluate its performance with publicly available datasets consisting of 6,000 spam and 2,313 non-spam image samples.
no code implementations • 22 Apr 2020 • William Aiken, Hyoungshick Kim, Simon Woo
Creating a state-of-the-art deep-learning system requires vast amounts of data, expertise, and hardware, yet research into embedding copyright protection for neural networks has been limited.
1 code implementation • 30 Mar 2020 • Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal
For learning performance, measured by model accuracy and convergence speed, we empirically evaluate both FL and SplitNN under different types of data distributions, such as imbalanced and non-independent and identically distributed (non-IID) data.
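A common way to simulate such non-IID conditions is a label-skew partition, where each client only sees a few classes. The sketch below is one simple illustrative scheme (client count and classes per client are arbitrary choices, not the paper's exact protocol).

```python
import numpy as np

rng = np.random.default_rng(0)

labels = rng.integers(0, 10, size=1000)  # toy 10-class label array

def label_skew_partition(labels, n_clients=5, classes_per_client=2):
    """Non-IID 'label skew' split: client c only receives samples from its
    own disjoint slice of classes. An IID split would instead shuffle all
    indices uniformly across clients."""
    parts = {}
    for c in range(n_clients):
        owned = list(range(c * classes_per_client,
                           (c + 1) * classes_per_client))
        parts[c] = np.flatnonzero(np.isin(labels, owned))
    return parts

parts = label_skew_partition(labels)
```

Because each client's local gradient now reflects only two classes, aggregated training drifts in conflicting directions — the effect whose impact on accuracy and convergence speed the paper measures.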
1 code implementation • 16 Mar 2020 • Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A. Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal
We observed that the 1D CNN model under split learning can achieve the same accuracy of 98.9% as the original (non-split) model.
3 code implementations • 23 Nov 2019 • Yansong Gao, Yeonjae Kim, Bao Gia Doan, Zhi Zhang, Gongxuan Zhang, Surya Nepal, Damith C. Ranasinghe, Hyoungshick Kim
In particular, for vision tasks, we can always achieve a 0% false rejection rate (FRR) and false acceptance rate (FAR).
Cryptography and Security