no code implementations • 5 Aug 2024 • Muhammad Salman, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Muhammad Ikram, Sidharth Kaushik, Mohamed Ali Kaafar
They have been demonstrated to pose significant challenges in domains like image classification, with results showing that an image adversarially perturbed to evade detection by one classifier is very likely to be transferable to other classifiers.
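As a rough illustration of this transferability claim, here is a minimal PyTorch sketch (not the paper's method) that crafts FGSM perturbations against a source classifier and measures how often they also fool a second, independently trained target classifier; `source`, `target`, and `loader` are assumed to be supplied by the caller:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Craft an FGSM adversarial example against `model` (Goodfellow et al.)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_rate(source, target, loader, eps=0.03):
    """Fraction of examples perturbed against `source` that also fool `target`."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_perturb(source, x, y, eps)
        pred = target(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total
```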
no code implementations • 27 Apr 2024 • Ali Reza Ghavamipour, Benjamin Zi Hao Zhao, Fatih Turkmen
Decentralized learning (DL) offers a novel paradigm in machine learning by distributing training across clients without central aggregation, enhancing scalability and efficiency.
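A minimal sketch of the decentralized ingredient described here, assuming a set of PyTorch clients with identical architectures and a fixed communication graph; each client simply averages parameters with its neighbors, with no central aggregator involved:

```python
import copy
import torch

def gossip_round(models, neighbors):
    """One synchronous gossip round of decentralized learning.
    `models`: list of torch.nn.Module, one per client (same architecture).
    `neighbors`: dict mapping client index -> list of neighbor indices."""
    # Snapshot all parameters first so updates within the round don't interact.
    snapshots = [copy.deepcopy(m.state_dict()) for m in models]
    for i, model in enumerate(models):
        peers = [i] + neighbors[i]
        averaged = {
            k: torch.stack([snapshots[j][k].float() for j in peers]).mean(dim=0)
            for k in snapshots[i]
        }
        model.load_state_dict(averaged)
```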
no code implementations • 27 Apr 2024 • Ali Reza Ghavamipour, Benjamin Zi Hao Zhao, Oguzhan Ersoy, Fatih Turkmen
Decentralized machine learning (DL) has been receiving increasing interest recently because it eliminates the single point of failure present in the federated learning setting.
1 code implementation • 6 Apr 2023 • Conor Atkins, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Ian Wood, Mohamed Ali Kaafar
We generate 150 examples of misinformation, of which 114 (76%) were remembered by BlenderBot 2 when combined with a personal statement.
no code implementations • 7 Dec 2022 • Benjamin Tag, Niels van Berkel, Sunny Verma, Benjamin Zi Hao Zhao, Shlomo Berkovsky, Dali Kaafar, Vassilis Kostakos, Olga Ohrimenko
Artificial Intelligence (AI) systems have been increasingly used to make decision-making processes faster, more accurate, and more efficient.
no code implementations • 4 Nov 2022 • Rana Salal Ali, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Tham Nguyen, Ian David Wood, Dali Kaafar
In this paper, we study the setting in which NER models are available as a black-box service for identifying sensitive information in user documents, and show that these models are vulnerable to membership inference on their training datasets.
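A common way such black-box vulnerability is exploited is confidence thresholding: training-set members tend to receive more confident predictions. The sketch below illustrates the general idea (not necessarily the paper's exact attack); `query_fn` is a hypothetical stand-in for the NER service and is assumed to return per-token label probabilities as a NumPy array:

```python
import numpy as np

def membership_score(query_fn, document):
    """Confidence-based membership score against a black-box NER API.
    Assumes query_fn(document) returns an array of shape
    [num_tokens, num_labels]; the name and shape are illustrative."""
    probs = query_fn(document)
    # Members tend to receive higher top-label confidence per token.
    return float(np.mean(probs.max(axis=1)))

def infer_membership(query_fn, document, threshold=0.95):
    # Documents scoring above a calibrated threshold are flagged as members.
    return membership_score(query_fn, document) > threshold
```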
no code implementations • 22 Oct 2021 • Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Tongliang Liu, Ke Deng
Although we know that benign gradients and Byzantine-attacked gradients are distributed differently, detecting the malicious gradients is challenging because (1) the gradients are high-dimensional and each dimension has its own distribution, and (2) benign and attacked gradients are always mixed together, so two-sample test methods cannot be applied directly.
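For intuition, a standard Byzantine-robust baseline sidesteps this mixing problem with coordinate-wise robust statistics rather than a direct two-sample test. The sketch below is that generic baseline, not the detection method proposed in this paper: aggregate by coordinate-wise median and flag clients that sit far from it:

```python
import torch

def coordinate_median_aggregate(grads):
    """Coordinate-wise median aggregation, a classic Byzantine-robust baseline.
    `grads`: tensor of shape [num_clients, num_params]."""
    return grads.median(dim=0).values

def flag_outliers(grads, k=2.0):
    """Flag clients whose gradients lie far from the coordinate-wise median,
    measured in units of the per-dimension median absolute deviation."""
    med = grads.median(dim=0).values
    mad = (grads - med).abs().median(dim=0).values + 1e-12
    dist = ((grads - med).abs() / mad).mean(dim=1)
    return dist > k  # boolean mask over clients
```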
1 code implementation • 1 May 2021 • Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu
We demonstrate a high attack success rate for the adversary, while maintaining functionality for regular users, with triggers that remain inconspicuous to human administrators.
no code implementations • 12 Mar 2021 • Benjamin Zi Hao Zhao, Aviral Agrawal, Catisha Coburn, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson
In this paper, we take a closer look at another inference attack reported in the literature, called attribute inference, whereby an attacker tries to infer missing attributes of a partially known record used in the training dataset by accessing the machine learning model as an API.
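A minimal sketch of the attribute inference idea, under the assumption that the API returns class-probability vectors; `predict_fn`, `candidates`, and `attr_index` are illustrative names, not the paper's interface. The attacker tries each candidate value for the missing attribute and keeps the one the model is most confident about for the record's known label:

```python
import numpy as np

def infer_attribute(predict_fn, partial_record, candidates, attr_index, true_label):
    """Guess a missing attribute by probing the model API (hypothetical
    signature): the candidate yielding the highest confidence on the
    record's true label is taken as the inferred value."""
    best_value, best_conf = None, -1.0
    for value in candidates:
        record = np.array(partial_record, dtype=float)
        record[attr_index] = value
        conf = predict_fn(record.reshape(1, -1))[0, true_label]
        if conf > best_conf:
            best_value, best_conf = value, conf
    return best_value
```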
no code implementations • 23 Feb 2021 • Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Haifeng Qian
Deep Neural Networks have achieved unprecedented success in face recognition, to the point that any individual can crawl others' data from the Internet without their explicit permission and train high-precision face recognition models on it, creating a serious violation of privacy.
1 code implementation • 20 Aug 2020 • Benjamin Zi Hao Zhao, Mohamed Ali Kaafar, Nicolas Kourtellis
In this work, we empirically evaluate various implementations of differential privacy (DP), and measure their ability to fend off real-world privacy attacks, in addition to measuring their core goal of providing accurate classifications.
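For reference, the core mechanism behind most DP implementations evaluated in such studies is DP-SGD (Abadi et al.): clip each per-example gradient, then add calibrated Gaussian noise. Below is a teaching sketch only, assuming every model parameter receives a gradient; production code would use a vetted library such as Opacus:

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr=0.1, clip=1.0, noise_multiplier=1.0):
    """One hand-rolled DP-SGD step: clip each per-example gradient to
    L2 norm `clip`, sum, add Gaussian noise, average, then update."""
    xs, ys = batch
    summed = None
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        flat = torch.cat([p.grad.flatten() for p in model.parameters()])
        flat = flat / torch.clamp(flat.norm() / clip, min=1.0)  # per-example clip
        summed = flat if summed is None else summed + flat
    # Noise scale follows the standard sigma * C calibration.
    noisy = (summed + torch.randn_like(summed) * noise_multiplier * clip) / len(xs)
    offset = 0
    with torch.no_grad():
        for p in model.parameters():  # apply the noisy averaged gradient
            n = p.numel()
            p -= lr * noisy[offset:offset + n].view_as(p)
            offset += n
```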
no code implementations • 16 Jul 2020 • Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao
The trigger can take a plethora of forms, including a special object present in the image (e.g., a yellow pad), a shape filled with custom textures (e.g., logos with particular colors) or even image-wide stylizations with special filters (e.g., images altered by Nashville or Gotham filters).
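The simplest of these trigger forms, a patch stamped in a fixed location (in the style of BadNets), can be sketched in a few lines; the function name and defaults below are illustrative:

```python
import numpy as np

def stamp_patch_trigger(image, target_label, patch_value=1.0, size=4):
    """Patch-style backdoor poisoning: stamp a small bright square in a
    corner and relabel the example to the attacker's target class.
    `image`: float array of shape [H, W, C] with values in [0, 1]."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = patch_value  # bottom-right square trigger
    return poisoned, target_label
```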
no code implementations • 21 Jun 2020 • Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Haifeng Qian
To this end, we analyze and develop a new poisoning attack algorithm.
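The snippet above does not spell out the algorithm, but for orientation, the simplest poisoning baseline such attacks improve upon is label flipping: corrupt a fraction of training labels before the victim trains. A hedged sketch (not the paper's attack):

```python
import numpy as np

def label_flip_poison(X, y, rate=0.1, num_classes=10, seed=0):
    """Label-flipping poisoning baseline: flip a fraction `rate` of the
    training labels to random incorrect classes. Returns poisoned data."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    for i in idx:
        # Pick any class other than the true one.
        choices = [c for c in range(num_classes) if c != y[i]]
        y_poisoned[i] = rng.choice(choices)
    return X, y_poisoned
```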
1 code implementation • 13 Jan 2020 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Mohamed Ali Kaafar
The average false positive rate (FPR) of the system, i.e., the rate at which an impostor is incorrectly accepted as the legitimate user, may be interpreted as a measure of the success probability of such an attack.
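This interpretation has a useful consequence: over repeated independent attempts, an impostor's overall success probability compounds as 1 - (1 - FPR)^k. A tiny sketch:

```python
def impostor_success_probability(fpr, attempts):
    """Probability that at least one of `attempts` independent impostor
    trials is accepted, given a per-trial false positive rate `fpr`."""
    return 1.0 - (1.0 - fpr) ** attempts

# e.g., a seemingly low 1% FPR compounds to ~9.6% success within 10 attempts:
# impostor_success_probability(0.01, 10) -> 0.0956...
```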
1 code implementation • 6 Sep 2019 • Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang
We show that the proposed invisible backdoors can be fairly effective across various DNN models as well as four datasets (MNIST, CIFAR-10, CIFAR-100, and GTSRB), by measuring their attack success rates for the adversary, functionality for normal users, and invisibility scores for administrators.
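The first two of these metrics are straightforward to compute; a minimal PyTorch sketch, assuming the caller supplies a loader of trigger-stamped inputs and the attacker's target label:

```python
import torch

def attack_success_rate(model, triggered_loader, target_label):
    """Fraction of trigger-stamped inputs classified as the attacker's target."""
    hits, total = 0, 0
    with torch.no_grad():
        for x, _ in triggered_loader:
            hits += (model(x).argmax(dim=1) == target_label).sum().item()
            total += x.size(0)
    return hits / total

def clean_accuracy(model, clean_loader):
    """Functionality for normal users: accuracy on unmodified inputs."""
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in clean_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```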
no code implementations • 28 Aug 2019 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar
A number of recent works have demonstrated that API access to machine learning models leaks information about the dataset records used to train the models.