no code implementations • 5 Aug 2024 • Muhammad Salman, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Muhammad Ikram, Sidharth Kaushik, Mohamed Ali Kaafar
They have been demonstrated to pose significant challenges in domains like image classification, with results showing that an image adversarially perturbed to evade one classifier is very likely to transfer to, and also evade, other classifiers.
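As a rough illustration of this transferability claim (not the paper's experiments), the sketch below crafts a one-step FGSM perturbation against one pretrained ImageNet classifier and checks whether it also flips the prediction of a second, independently trained classifier; the model choices, the placeholder input, and the epsilon value are all illustrative assumptions.

```python
# Hypothetical sketch: craft an FGSM adversarial image on one classifier and
# test whether it also fools a second, independently trained classifier.
import torch
import torch.nn.functional as F
from torchvision import models

source_model = models.resnet18(weights="IMAGENET1K_V1").eval()   # attacked model
target_model = models.vgg16(weights="IMAGENET1K_V1").eval()      # transfer target

def fgsm(model, x, label, eps=0.03):
    """One-step FGSM perturbation: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)          # placeholder image batch
label = torch.tensor([7])               # placeholder ground-truth class
x_adv = fgsm(source_model, x, label)

# Transferability check: does the perturbation crafted on the source model
# also change the target model's prediction?
transfers = target_model(x_adv).argmax(1) != target_model(x).argmax(1)
print("perturbation transfers:", bool(transfers))
```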
no code implementations • 4 Oct 2023 • Hassan Jameel Asghar, Zhigang Lu, Zhongrui Zhao, Dali Kaafar
In this work, we construct an interactive protocol for this problem based on the fully homomorphic encryption scheme over the Torus (TFHE) and label differential privacy, where the underlying machine learning model is a neural network.
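The protocol combines TFHE with label differential privacy; the full encrypted construction is beyond a short sketch, but the label-DP ingredient can be illustrated with the standard k-ary randomized response mechanism below. This is a generic mechanism, not the authors' protocol, and the epsilon value and class count are assumptions.

```python
# Minimal sketch of label differential privacy via randomized response,
# one standard way to protect labels; the paper's actual TFHE-based protocol
# is not reproduced here.
import numpy as np

def randomized_response_labels(labels, num_classes, epsilon, rng=None):
    """Keep each label w.p. e^eps / (e^eps + k - 1), else flip to a uniform other class."""
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    keep = rng.random(labels.shape) < p_keep
    random_other = (labels + rng.integers(1, num_classes, labels.shape)) % num_classes
    return np.where(keep, labels, random_other)

noisy = randomized_response_labels([0, 1, 2, 1, 0], num_classes=3, epsilon=1.0)
print(noisy)
```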
no code implementations • 12 Apr 2023 • Gioacchino Tangari, Shreesh Keskar, Hassan Jameel Asghar, Dali Kaafar
For the biometric authentication use case, we need to investigate this under adversarial settings where an attacker has access to a feature-space representation but no direct access to the original dataset or the original learned model.
1 code implementation • 6 Apr 2023 • Conor Atkins, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Ian Wood, Mohamed Ali Kaafar
We generate 150 examples of misinformation, of which 114 (76%) are remembered by BlenderBot 2 when combined with a personal statement.
no code implementations • 4 Nov 2022 • Rana Salal Ali, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Tham Nguyen, Ian David Wood, Dali Kaafar
In this paper, we study the setting when NER models are available as a black-box service for identifying sensitive information in user documents and show that these models are vulnerable to membership inference on their training datasets.
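For intuition only, a common baseline form of membership inference against a black-box service thresholds the model's confidence on a candidate document; the sketch below is such a generic baseline, with hypothetical confidence values and threshold, and is not the attack developed in the paper.

```python
# Illustrative sketch (not the paper's attack): a simple confidence-threshold
# membership inference test against a black-box model that returns per-token
# or per-document confidence scores.
import numpy as np

def membership_score(confidences):
    """Aggregate black-box confidences for one document (here: the mean)."""
    return float(np.mean(confidences))

def infer_member(confidences, threshold=0.95):
    """Guess 'member' if the model is unusually confident on this document."""
    return membership_score(confidences) >= threshold

# Hypothetical confidence outputs returned by the NER service
train_doc_scores = [0.99, 0.98, 0.97]   # document seen during training
fresh_doc_scores = [0.80, 0.65, 0.90]   # unseen document
print(infer_member(train_doc_scores))   # likely True
print(infer_member(fresh_doc_scores))   # likely False
```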
no code implementations • 3 Apr 2022 • Zhigang Lu, Hassan Jameel Asghar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson
Under a black-box setting, and based on this global sensitivity, we propose a novel output perturbation framework that controls the overall noise injection by injecting DP noise into a randomly sampled neuron (chosen via the exponential mechanism) at the output layer of a baseline non-private neural network trained with a convexified loss function.
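A hedged sketch of that high-level idea follows: sample one output neuron with the exponential mechanism (using its score as utility) and add Laplace noise only to that neuron. The utility function, sensitivity, and noise scale here are illustrative assumptions rather than the paper's exact construction.

```python
# Hedged sketch: pick one output neuron with the exponential mechanism
# (utility = its score) and perturb only that neuron with Laplace noise.
import numpy as np

def exponential_mechanism_index(scores, epsilon, sensitivity):
    """Sample an index with probability proportional to exp(eps * u / (2 * sensitivity))."""
    logits = epsilon * np.asarray(scores) / (2.0 * sensitivity)
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return np.random.choice(len(scores), p=probs)

def perturb_output(output_scores, epsilon, sensitivity):
    """Add Laplace noise to a single exponentially sampled output neuron."""
    idx = exponential_mechanism_index(output_scores, epsilon, sensitivity)
    noisy = np.array(output_scores, dtype=float)
    noisy[idx] += np.random.laplace(scale=sensitivity / epsilon)
    return noisy

print(perturb_output([2.1, 0.3, -1.0], epsilon=1.0, sensitivity=1.0))
```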
no code implementations • 12 Mar 2021 • Benjamin Zi Hao Zhao, Aviral Agrawal, Catisha Coburn, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson
In this paper, we take a closer look at another inference attack reported in the literature, called attribute inference, in which an attacker tries to infer missing attributes of a partially known record used in the training dataset by accessing the machine learning model as an API.
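The sketch below shows the generic shape of such an attribute inference attack, assuming the API returns class probabilities: enumerate candidate values for the missing attribute, query the model with each completed record, and output the candidate on which the model is most confident. The helper query_model and the aggregation rule are hypothetical, not the paper's method.

```python
# Generic attribute inference sketch (not the paper's exact attack).
import numpy as np

def attribute_inference(query_model, partial_record, attr_name, candidates):
    """Return the candidate value that maximizes the model's confidence."""
    best_value, best_conf = None, -np.inf
    for value in candidates:
        record = dict(partial_record, **{attr_name: value})
        conf = max(query_model(record))          # model returns class probabilities
        if conf > best_conf:
            best_value, best_conf = value, conf
    return best_value

# query_model is a hypothetical wrapper around the ML-as-a-service API.
```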
1 code implementation • 13 Jan 2020 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Mohamed Ali Kaafar
The average false positive rate (FPR) of the system, i.e., the rate at which an impostor is incorrectly accepted as the legitimate user, may be interpreted as a measure of the success probability of such an attack.
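Under the (assumed) simplification that attempts are independent and each is accepted with probability equal to the FPR, this reading extends to multiple attempts:

```latex
% Success probability of a random-impostor attack over k independent attempts,
% each accepted with probability equal to the system's false positive rate FPR:
\[
  \Pr[\text{accepted within } k \text{ attempts}] \;=\; 1 - (1 - \mathrm{FPR})^{k},
\]
% e.g. with $\mathrm{FPR}=0.01$ and $k=10$ attempts, this is $1 - 0.99^{10} \approx 0.096$.
```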
no code implementations • 28 Aug 2019 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar
A number of recent works have demonstrated that API access to machine learning models leaks information about the dataset records used to train the models.
no code implementations • 6 Jun 2018 • Parameswaran Kamalaruban, Victor Perrier, Hassan Jameel Asghar, Mohamed Ali Kaafar
However, it provides the same level of protection for all elements (individuals and attributes) in the data.
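This uniformity is visible in the standard epsilon-differential-privacy definition, where a single parameter bounds the mechanism's behaviour on every pair of neighbouring datasets, and hence applies equally to every individual and attribute:

```latex
% Standard (uniform) epsilon-differential privacy: one privacy parameter
% epsilon for all neighbouring datasets D, D' and all output sets S.
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S]
  \quad \text{for all neighbouring } D, D' \text{ and all } S \subseteq \mathrm{Range}(\mathcal{M}).
\]
```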