Search Results for author: Alsharif Abuadbba

Found 25 papers, 3 papers with code

SoK: Can Trajectory Generation Combine Privacy and Utility?

1 code implementation12 Mar 2024 Erik Buchholz, Alsharif Abuadbba, Shuo Wang, Surya Nepal, Salil S. Kanhere

This work focuses on the systematisation of the state-of-the-art generative models for trajectories in the context of the proposed framework.

Privacy Preserving

A Generative Adversarial Attack for Multilingual Text Classifiers

no code implementations16 Jan 2024 Tom Roth, Inigo Jauregi Unanue, Alsharif Abuadbba, Massimo Piccardi

Current adversarial attack algorithms, where an adversary changes a text to fool a victim model, have been repeatedly shown to be effective against text classifiers.

Adversarial Attack

SoK: Facial Deepfake Detectors

no code implementations9 Jan 2024 Binh M. Le, Jiwon Kim, Shahroz Tariq, Kristen Moore, Alsharif Abuadbba, Simon S. Woo

Our systematized analysis and experimentation lay the groundwork for a deeper understanding of deepfake detectors and their generalizability, paving the way for future research focused on creating detectors adept at countering various attack scenarios.

DeepFake Detection Face Swapping

Deepfake in the Metaverse: Security Implications for Virtual Gaming, Meetings, and Offices

no code implementations26 Mar 2023 Shahroz Tariq, Alsharif Abuadbba, Kristen Moore

This paper examines the security implications of deepfakes in the metaverse, specifically in the context of gaming, online meetings, and virtual offices.

Face Swapping

Why Do Facial Deepfake Detectors Fail?

no code implementations25 Feb 2023 Binh Le, Shahroz Tariq, Alsharif Abuadbba, Kristen Moore, Simon Woo

Recent rapid advancements in deepfake technology have allowed the creation of highly realistic fake media, such as video, image, and audio.

DeepFake Detection Face Swapping +1

Cybersecurity Challenges of Power Transformers

no code implementations25 Feb 2023 Hossein Rahimpour, Joe Tusek, Alsharif Abuadbba, Aruna Seneviratne, Toan Phung, Ahmed Musleh, Boyu Liu

Cyber threats against critical infrastructure, with their potential for devastating consequences, have risen significantly.

DeepTaster: Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in Deep Neural Networks

no code implementations24 Nov 2022 Seonhye Park, Alsharif Abuadbba, Shuo Wang, Kristen Moore, Yansong Gao, Hyoungshick Kim, Surya Nepal

In this study, we introduce DeepTaster, a novel DNN fingerprinting technique, to address scenarios where a victim's data is unlawfully used to build a suspect model.

Data Augmentation Transfer Learning

TransCAB: Transferable Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World

1 code implementation6 Sep 2022 Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Nepal Surya, Derek Abbott

We observe that both the misclassification and the cloaking backdoor effects are robustly achieved in the wild when the backdoor is activated with inconspicuous, natural physical triggers.

Event Detection Image Classification +4

PhishClone: Measuring the Efficacy of Cloning Evasion Attacks

no code implementations4 Sep 2022 Arthur Wong, Alsharif Abuadbba, Mahathir Almashor, Salil Kanhere

We then reported our sites to VirusTotal and other platforms, with regular polling of results for 7 days, to ascertain the efficacy of each cloning technique.

Profiler: Profile-Based Model to Detect Phishing Emails

no code implementations18 Aug 2022 Mariya Shmalko, Alsharif Abuadbba, Raj Gaire, Tingmin Wu, Hye-Young Paik, Surya Nepal

The Profiler does not require large data sets to train on to be effective and its analysis of varied email features reduces the impact of concept drift.

Binarizing Split Learning for Data Privacy Enhancement and Computation Reduction

no code implementations10 Jun 2022 Ngoc Duy Pham, Alsharif Abuadbba, Yansong Gao, Tran Khoa Phan, Naveen Chilamkurti

Experimental results with different datasets have affirmed the advantages of the B-SL models compared with several benchmark models.
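As a rough illustration of the binarized split learning (B-SL) idea, the sketch below shows a client binarizing its intermediate activations ("smashed data") before sending them to the server, which cuts communication cost and exposes less of the raw signal. The toy layer, weights, and sign threshold are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (with made-up sizes) of binarizing smashed data in
# split learning: the client runs one toy linear layer, then sends only
# the signs of the activations to the server.

def client_forward(x, weights):
    # One linear layer: each output is a dot product of a weight row with x.
    activations = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    # Binarize with a sign function; each +/-1 value fits in a single bit.
    return [1 if a >= 0 else -1 for a in activations]

x = [0.5, -1.0, 2.0]                      # hypothetical client input
weights = [[1.0, 0.0, 0.0],               # hypothetical client-side layer
           [0.0, 1.0, 0.0]]
smashed = client_forward(x, weights)      # what the server receives
```

The server would continue the forward pass from `smashed` alone, never seeing `x`.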

Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures

no code implementations13 Apr 2022 Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao

Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models.

Towards Web Phishing Detection Limitations and Mitigation

no code implementations3 Apr 2022 Alsharif Abuadbba, Shuo Wang, Mahathir Almashor, Muhammed Ejaz Ahmed, Raj Gaire, Seyit Camtepe, Surya Nepal

However, with an average of 10K phishing links reported per hour to platforms such as PhishTank and VirusTotal (VT), the deficiencies of such ML-based solutions are laid bare.

Attribute

Email Summarization to Assist Users in Phishing Identification

no code implementations24 Mar 2022 Amir Kashapov, Tingmin Wu, Alsharif Abuadbba, Carsten Rudolph

Cyber-phishing attacks recently became more precise, targeted, and tailored by training data to activate only in the presence of specific information or cues.

NTD: Non-Transferability Enabled Backdoor Detection

no code implementations22 Nov 2021 Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, Derek Abbott

A backdoor deep learning (DL) model behaves normally upon clean inputs but misbehaves upon trigger inputs as the backdoor attacker desires, posing severe consequences to DL model deployments.

Face Recognition Traffic Sign Recognition

RAIDER: Reinforcement-aided Spear Phishing Detector

no code implementations17 May 2021 Keelan Evans, Alsharif Abuadbba, Tingmin Wu, Kristen Moore, Mohiuddin Ahmed, Ganna Pogrebna, Surya Nepal, Mike Johnstone

RAIDER also keeps the number of features to a minimum by selecting only the significant features to represent phishing emails and detect spear-phishing attacks.

Binary Classification reinforcement-learning +1

OCTOPUS: Overcoming Performance and Privatization Bottlenecks in Distributed Learning

no code implementations3 May 2021 Shuo Wang, Surya Nepal, Kristen Moore, Marthie Grobler, Carsten Rudolph, Alsharif Abuadbba

We introduce a new distributed/collaborative learning scheme to address communication overhead via latent compression, leveraging global data while providing privatization of local data without additional cost due to encryption or perturbation.

Disentanglement Federated Learning

Token-Modification Adversarial Attacks for Natural Language Processing: A Survey

no code implementations1 Mar 2021 Tom Roth, Yansong Gao, Alsharif Abuadbba, Surya Nepal, Wei Liu

Many adversarial attacks target natural language processing systems, most of which succeed through modifying the individual tokens of a document.
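A token-modification attack of the kind surveyed here can be sketched as a greedy loop that swaps one token at a time for a substitute and stops as soon as the victim model's prediction flips. The classifier and synonym table below are hypothetical stand-ins for illustration only, not an attack from the survey.

```python
# Hedged sketch of a greedy token-modification adversarial attack.
# Both the synonym table and the toy classifier are invented examples.

SYNONYMS = {"good": ["fine", "decent"], "great": ["fine", "okay"]}

def toy_classifier(tokens):
    # Hypothetical sentiment rule: positive (1) iff a strong word appears.
    return 1 if any(t in ("good", "great") for t in tokens) else 0

def token_modification_attack(tokens, classifier):
    original = classifier(tokens)
    for i, tok in enumerate(tokens):
        for sub in SYNONYMS.get(tok, []):
            candidate = tokens[:i] + [sub] + tokens[i + 1:]
            if classifier(candidate) != original:
                return candidate  # prediction flipped: adversarial example
    return None  # no single-token substitution fooled the model

adv = token_modification_attack(["a", "great", "movie"], toy_classifier)
```

Real attacks differ mainly in how candidate substitutions are generated (embeddings, language models) and in which token to perturb first.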

DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN

no code implementations12 Jan 2021 Alsharif Abuadbba, Hyoungshick Kim, Surya Nepal

In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks.

Autonomous Vehicles

Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks

no code implementations8 Oct 2020 Bedeuro Kim, Alsharif Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad Ejaz Ahmed, Hyoungshick Kim, Surya Nepal

To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds.

Steganalysis
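One generic way to check for image-scaling attacks, loosely inspired by this line of work but not a reimplementation of Decamouflage, is to downscale the image, upscale it back, and flag a suspiciously large reconstruction error, since an embedded payload concentrated in the pixels the scaler samples reconstructs poorly. The 2x2 average pooling, nearest-neighbour upscaling, and any threshold choice below are illustrative assumptions.

```python
# Hedged sketch of a scaling-attack check on a nested-list grayscale image:
# large downscale/upscale reconstruction error suggests a hidden payload.

def downscale2x(img):
    # 2x2 average pooling.
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

def upscale2x(img):
    # Nearest-neighbour upscaling back to the original size.
    return [[img[r // 2][c // 2]
             for c in range(2 * len(img[0]))]
            for r in range(2 * len(img))]

def scaling_attack_score(img):
    # Mean squared error between the image and its rescaled reconstruction.
    rec = upscale2x(downscale2x(img))
    n = len(img) * len(img[0])
    return sum((a - b) ** 2
               for row_a, row_b in zip(img, rec)
               for a, b in zip(row_a, row_b)) / n

clean = [[10, 10], [10, 10]]   # uniform image: reconstructs exactly
spiky = [[0, 255], [255, 0]]   # high-frequency content: reconstructs badly
```

A detector would compare the score against a threshold calibrated on benign images.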

Evaluation of Federated Learning in Phishing Email Detection

no code implementations27 Jul 2020 Chandra Thapa, Jun Wen Tang, Alsharif Abuadbba, Yansong Gao, Seyit Camtepe, Surya Nepal, Mahathir Almashor, Yifeng Zheng

For a fixed total email dataset, the global RNN-based model suffers a 1.8% accuracy drop when the number of organisations increases from 2 to 10.

Distributed Computing Federated Learning +2
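The federated setting evaluated above can be sketched with the standard FedAvg aggregation step: each organisation trains locally, and the server averages the model weights in proportion to each organisation's email count. The weight vectors and dataset sizes below are made-up numbers for illustration.

```python
# Minimal FedAvg-style aggregation sketch (hypothetical weights and sizes).

def fed_avg(client_weights, client_sizes):
    # Weighted average of per-client parameter vectors, weighted by
    # each client's local dataset size.
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]

# Two organisations with unequal shares of the email data.
global_w = fed_avg([[0.2, 0.4], [0.6, 0.8]], [300, 100])
```

As the paper's result suggests, splitting a fixed dataset across more organisations gives each client less local data per round, which can cost global accuracy.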

Adversarial Defense by Latent Style Transformations

no code implementations17 Jun 2020 Shuo Wang, Surya Nepal, Alsharif Abuadbba, Carsten Rudolph, Marthie Grobler

The intuition behind our approach is that the essential characteristics of a normal image are generally consistent with non-essential style transformations, e.g., slightly changing the facial expression of human portraits.

Adversarial Defense

Reliability and Robustness analysis of Machine Learning based Phishing URL Detectors

1 code implementation18 May 2020 Bushra Sabir, M. Ali Babar, Raj Gaire, Alsharif Abuadbba

Therefore, the security vulnerabilities of these systems remain largely unknown, which calls for testing their robustness.
