1 code implementation • 12 Mar 2024 • Erik Buchholz, Alsharif Abuadbba, Shuo Wang, Surya Nepal, Salil S. Kanhere
This work systematises state-of-the-art generative models for trajectories within the proposed framework.
no code implementations • 16 Jan 2024 • Tom Roth, Inigo Jauregi Unanue, Alsharif Abuadbba, Massimo Piccardi
Current adversarial attack algorithms, where an adversary changes a text to fool a victim model, have been repeatedly shown to be effective against text classifiers.
no code implementations • 9 Jan 2024 • Binh M. Le, Jiwon Kim, Shahroz Tariq, Kristen Moore, Alsharif Abuadbba, Simon S. Woo
Our systematized analysis and experimentation lay the groundwork for a deeper understanding of deepfake detectors and their generalizability, paving the way for future research focused on creating detectors adept at countering various attack scenarios.
no code implementations • 26 Mar 2023 • Shahroz Tariq, Alsharif Abuadbba, Kristen Moore
This paper examines the security implications of deepfakes in the metaverse, specifically in the context of gaming, online meetings, and virtual offices.
no code implementations • 25 Feb 2023 • Binh Le, Shahroz Tariq, Alsharif Abuadbba, Kristen Moore, Simon Woo
Recent rapid advancements in deepfake technology have allowed the creation of highly realistic fake media, such as video, image, and audio.
no code implementations • 25 Feb 2023 • Hossein Rahimpour, Joe Tusek, Alsharif Abuadbba, Aruna Seneviratne, Toan Phung, Ahmed Musleh, Boyu Liu
Cyber threats against critical infrastructure, and their potential for devastating consequences, have risen significantly.
no code implementations • 24 Nov 2022 • Seonhye Park, Alsharif Abuadbba, Shuo Wang, Kristen Moore, Yansong Gao, Hyoungshick Kim, Surya Nepal
In this study, we introduce DeepTaster, a novel DNN fingerprinting technique, to address scenarios where a victim's data is unlawfully used to build a suspect model.
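As a rough illustration of the fingerprinting idea (not DeepTaster's actual technique), one can probe whether a suspect model agrees with the victim on crafted inputs; `victim_predict`, `suspect_predict`, and the probe set below are hypothetical stand-ins:

```python
import numpy as np

def fingerprint_score(victim_predict, suspect_predict, probes, labels):
    """Generic fingerprinting sketch: if a suspect model matches the
    victim's behaviour on crafted probe inputs far more often than an
    independently trained model would, it may derive from the victim's
    data or model. All arguments are hypothetical stand-ins."""
    victim_hits = np.mean([victim_predict(p) == y for p, y in zip(probes, labels)])
    suspect_hits = np.mean([suspect_predict(p) == y for p, y in zip(probes, labels)])
    return suspect_hits / max(victim_hits, 1e-9)  # ratio near 1.0 is suspicious
```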
1 code implementation • 6 Sep 2022 • Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
We observe that the backdoor effects of both misclassification and cloaking are robustly achieved in the wild when the backdoor is activated by inconspicuous, natural physical triggers.
no code implementations • 4 Sep 2022 • Arthur Wong, Alsharif Abuadbba, Mahathir Almashor, Salil Kanhere
We then reported our sites to VirusTotal and other platforms, polling the results regularly for 7 days to ascertain the efficacy of each cloning technique.
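A minimal sketch of such polling against the VirusTotal v3 URL endpoint (the API key and cloned-site URL are placeholders, and the daily cadence is an assumption):

```python
import base64
import time
import requests

URL = "https://www.virustotal.com/api/v3/urls/{}"
# VT v3 identifies a URL by its unpadded URL-safe base64 encoding
url_id = base64.urlsafe_b64encode(b"https://cloned-site.example/").decode().strip("=")

for day in range(7):  # poll once per day for a week
    resp = requests.get(URL.format(url_id), headers={"x-apikey": "YOUR_VT_API_KEY"})
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"day {day}: {stats['malicious']} engines flag the site as malicious")
    time.sleep(24 * 60 * 60)
```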
no code implementations • 18 Aug 2022 • Mariya Shmalko, Alsharif Abuadbba, Raj Gaire, Tingmin Wu, Hye-Young Paik, Surya Nepal
The Profiler does not require large training datasets to be effective, and its analysis of varied email features reduces the impact of concept drift.
no code implementations • 10 Jun 2022 • Ngoc Duy Pham, Alsharif Abuadbba, Yansong Gao, Tran Khoa Phan, Naveen Chilamkurti
Experimental results with different datasets have affirmed the advantages of the B-SL models compared with several benchmark models.
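A toy sketch of the split-learning-with-binarization idea in PyTorch (layer sizes and the sign-style binarization are assumptions; real binarized training needs straight-through gradient estimators, omitted here):

```python
import torch
import torch.nn as nn

client = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # runs on the client device
server = nn.Sequential(nn.Linear(64, 2))              # runs on the server

x = torch.randn(16, 32)            # stand-in private client records
smashed = client(x)                # intermediate ("smashed") activations
binarized = (smashed > 0).float()  # 1-bit codes: cheaper to transmit and
                                   # leaking less about the raw input
logits = server(binarized)
print(logits.shape)  # torch.Size([16, 2])
```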
no code implementations • 13 Apr 2022 • Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao
Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models.
no code implementations • 3 Apr 2022 • Alsharif Abuadbba, Shuo Wang, Mahathir Almashor, Muhammed Ejaz Ahmed, Raj Gaire, Seyit Camtepe, Surya Nepal
However, with an average of 10K phishing links reported per hour to platforms such as PhishTank and VirusTotal (VT), the deficiencies of such ML-based solutions are laid bare.
no code implementations • 24 Mar 2022 • Amir Kashapov, Tingmin Wu, Alsharif Abuadbba, Carsten Rudolph
Cyber-phishing attacks have recently become more precise, targeted, and tailored, trained to activate only in the presence of specific information or cues.
no code implementations • 21 Jan 2022 • Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
The average attack success rate (ASR) remains high at 78% in the transfer learning attack scenarios evaluated on CenterNet.
no code implementations • 22 Nov 2021 • Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, Derek Abbott
A backdoor deep learning (DL) model behaves normally upon clean inputs but misbehaves upon trigger inputs as the backdoor attacker desires, posing severe consequences to DL model deployments.
no code implementations • 20 Aug 2021 • Hua Ma, Huming Qiu, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Minhui Xue, Anmin Fu, Zhang Jiliang, Said Al-Sarawi, Derek Abbott
This work reveals that the standard quantization toolkits can be abused to activate a backdoor.
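One simple way to probe for such behaviour is to compare a model's predictions before and after standard post-training quantization; the model, inputs, and disagreement check below are illustrative:

```python
import torch
import torch.nn as nn

# stand-in model; in practice this is the downloaded full-precision model
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()

# standard post-training dynamic quantization as shipped with PyTorch
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(8, 16)  # stand-in inputs; a real audit would include trigger candidates
with torch.no_grad():
    drift = float((model(x).argmax(1) != qmodel(x).argmax(1)).float().mean())
# benign models predict almost identically after quantization; large label
# drift concentrated on specific inputs hints at a quantization-activated backdoor
print(f"label disagreement after quantization: {drift:.2%}")
```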
no code implementations • 17 May 2021 • Keelan Evans, Alsharif Abuadbba, Tingmin Wu, Kristen Moore, Mohiuddin Ahmed, Ganna Pogrebna, Surya Nepal, Mike Johnstone
RAIDER also keeps the number of features to a minimum by selecting only the significant features to represent phishing emails and detect spear-phishing attacks.
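RAIDER's selection is reinforcement-aided; the sketch below substitutes a simple univariate filter merely to illustrate the payoff of a compact feature set (the data is synthetic):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# X: one row per email with hypothetical header/body features;
# y: 1 for spear-phishing, 0 for benign (synthetic stand-in data)
rng = np.random.default_rng(0)
X = rng.random((200, 40))
y = rng.integers(0, 2, 200)

# keep only the most informative features, mirroring the goal of representing
# phishing emails with a minimal set of significant features
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_small = selector.transform(X)
print(X_small.shape)  # (200, 10)
```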
no code implementations • 3 May 2021 • Shuo Wang, Surya Nepal, Kristen Moore, Marthie Grobler, Carsten Rudolph, Alsharif Abuadbba
We introduce a new distributed/collaborative learning scheme that addresses communication overhead via latent compression, leveraging global data while keeping local data private without the additional cost of encryption or perturbation.
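A minimal sketch of the latent-compression idea (the dimensions and the untrained autoencoder are placeholders): each participant shares only short latent codes, never raw records.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(128, 16)   # stays on the participant; compresses records 8x
decoder = nn.Linear(16, 128)   # used by the global learner

local_batch = torch.randn(32, 128)   # stand-in private records
latents = encoder(local_batch)       # only these codes leave the device
reconstruction = decoder(latents)
loss = nn.functional.mse_loss(reconstruction, local_batch)
print(latents.shape, float(loss))    # smaller payload, no raw data shared
```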
no code implementations • 1 Mar 2021 • Tom Roth, Yansong Gao, Alsharif Abuadbba, Surya Nepal, Wei Liu
Many adversarial attacks target natural language processing systems, most of which succeed through modifying the individual tokens of a document.
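A toy greedy version of such a token-level attack (`victim_predict` and the substitute dictionary are assumed given, e.g., from a synonym list):

```python
def token_flip_attack(tokens, victim_predict, substitutes):
    """Try substituting one token at a time and keep the first edit that
    flips the victim's predicted label."""
    original = victim_predict(tokens)
    for i, tok in enumerate(tokens):
        for cand in substitutes.get(tok, []):
            edited = tokens[:i] + [cand] + tokens[i + 1:]
            if victim_predict(edited) != original:
                return edited  # a single substitution fooled the victim
    return None  # no single-token edit succeeded
```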
no code implementations • 12 Jan 2021 • Alsharif Abuadbba, Hyoungshick Kim, Surya Nepal
In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks.
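DeepiSign hides the signature inside the model itself; the sketch below illustrates only the underlying fragile-integrity idea, using an external SHA-256 digest over the weights:

```python
import hashlib
import numpy as np

def weights_digest(weights):
    """SHA-256 over all weight tensors: any single-parameter manipulation
    changes the digest, so tampering is detectable on re-verification."""
    h = hashlib.sha256()
    for w in weights:
        h.update(np.ascontiguousarray(w, dtype=np.float32).tobytes())
    return h.hexdigest()

# toy "model": two random weight tensors
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 8)), rng.normal(size=(8,))]
reference = weights_digest(weights)

weights[0][0, 0] += 0.5  # an attacker flips one weight (e.g., backdoor insertion)
assert weights_digest(weights) != reference  # tampering detected
```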
no code implementations • 8 Oct 2020 • Bedeuro Kim, Alsharif Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad Ejaz Ahmed, Hyoungshick Kim, Surya Nepal
To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds.
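One of the simplest checks in this spirit is to resize an input down and back up and score the inconsistency (the metric and threshold below are assumptions; Decamouflage combines several detection methods):

```python
import numpy as np
from PIL import Image

def scaling_inconsistency(img: Image.Image, target=(64, 64)) -> float:
    """Down-scale, up-scale back, and measure MSE against the original.
    Benign images change smoothly; image-scaling attacks hide a second
    image that emerges on resizing, yielding an unusually large score."""
    down = img.resize(target, Image.BILINEAR)
    back = down.resize(img.size, Image.BILINEAR)
    a = np.asarray(img, dtype=np.float32)
    b = np.asarray(back, dtype=np.float32)
    return float(np.mean((a - b) ** 2))

# flag an input if its score exceeds a threshold fitted on benign images:
# suspicious = scaling_inconsistency(Image.open("input.png")) > THRESHOLD
```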
no code implementations • 27 Jul 2020 • Chandra Thapa, Jun Wen Tang, Alsharif Abuadbba, Yansong Gao, Seyit Camtepe, Surya Nepal, Mahathir Almashor, Yifeng Zheng
For a fixed total email dataset, the global RNN-based model suffers a 1.8% accuracy drop when the organizational count increases from 2 to 10.
no code implementations • 17 Jun 2020 • Shuo Wang, Surya Nepal, Alsharif Abuadbba, Carsten Rudolph, Marthie Grobler
The intuition behind our approach is that the essential characteristics of a normal image remain consistent under non-essential style transformations, e.g., slightly changing the facial expression of a human portrait.
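A sketch of that consistency test (`model_probs`, the transformation list, and the tolerance are hypothetical):

```python
import numpy as np

def flags_as_adversarial(model_probs, image, transforms, tol=0.2):
    """Flag inputs whose predictions drift under non-essential edits.
    `model_probs(image)` returns a class-probability vector; `transforms`
    are mild style changes such as a slight brightness shift."""
    base = model_probs(image)
    for t in transforms:
        if np.abs(base - model_probs(t(image))).max() > tol:
            return True  # unstable under a mild edit => suspicious
    return False

# example of a mild, non-essential transformation on a [0, 1] image array
brighten = lambda x: np.clip(x * 1.05, 0.0, 1.0)
```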
1 code implementation • 18 May 2020 • Bushra Sabir, M. Ali Babar, Raj Gaire, Alsharif Abuadbba
Therefore, the security vulnerabilities of these systems remain largely unknown, which calls for testing their robustness.