Search Results for author: Hassan Jameel Asghar

Found 9 papers, 2 papers with code

Practical, Private Assurance of the Value of Collaboration

no code implementations • 4 Oct 2023 • Hassan Jameel Asghar, Zhigang Lu, Zhongrui Zhao, Dali Kaafar

In this work, we construct an interactive protocol for this problem based on the fully homomorphic encryption scheme over the Torus (TFHE) and label differential privacy, where the underlying machine learning model is a neural network.
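The full protocol runs over TFHE ciphertexts, which is beyond a short snippet, but the label differential privacy ingredient can be illustrated with a minimal k-ary randomized-response sketch. This is purely illustrative; the function name and interface below are assumptions, not the paper's construction.

```python
import numpy as np

def randomize_labels(labels, num_classes, epsilon, rng=None):
    """Label-DP sketch via k-ary randomized response: keep the true label with
    probability p_keep, otherwise replace it with a uniformly random other class."""
    rng = np.random.default_rng() if rng is None else rng
    labels = np.asarray(labels)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    keep = rng.random(labels.shape) < p_keep
    # Sample a replacement class uniformly from the classes other than the true one.
    noise = rng.integers(0, num_classes - 1, size=labels.shape)
    noise = noise + (noise >= labels)  # shift so the true label is skipped
    return np.where(keep, labels, noise)

# Example: privatize labels before they are shared with the other party.
private_labels = randomize_labels([0, 2, 1, 1], num_classes=3, epsilon=1.0)
```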

On the Adversarial Inversion of Deep Biometric Representations

no code implementations • 12 Apr 2023 • Gioacchino Tangari, Shreesh Keskar, Hassan Jameel Asghar, Dali Kaafar

For the biometric authentication use case, we need to investigate this under adversarial settings where an attacker has access to a feature-space representation but no direct access to either the exact original dataset or the original learned model.
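To make "inverting a feature-space representation" concrete, here is a minimal gradient-based reconstruction sketch in PyTorch. It assumes the attacker has a differentiable substitute feature extractor, whereas the paper's setting denies access to the original model, so treat this only as a schematic; feature_extractor, target_embedding, and input_shape are placeholders.

```python
import torch

def invert_embedding(feature_extractor, target_embedding, input_shape,
                     steps=500, lr=0.05):
    """Inversion sketch: optimise an input so that its feature-space
    representation is close to a stolen target embedding."""
    x = torch.zeros(1, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(feature_extractor(x), target_embedding)
        loss.backward()
        opt.step()
    return x.detach()
```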

Those Aren't Your Memories, They're Somebody Else's: Seeding Misinformation in Chat Bot Memories

1 code implementation • 6 Apr 2023 • Conor Atkins, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Ian Wood, Mohamed Ali Kaafar

We generate 150 examples of misinformation, of which 114 (76%) are remembered by BlenderBot 2 when combined with a personal statement.

Misinformation

Unintended Memorization and Timing Attacks in Named Entity Recognition Models

no code implementations • 4 Nov 2022 • Rana Salal Ali, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Tham Nguyen, Ian David Wood, Dali Kaafar

In this paper, we study the setting when NER models are available as a black-box service for identifying sensitive information in user documents and show that these models are vulnerable to membership inference on their training datasets.

Memorization • named-entity-recognition • +2
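As a rough illustration of how a black-box timing signal could be gathered against such a service (this is not the paper's attack; query_fn, candidate_text, and baseline_texts are hypothetical placeholders):

```python
import time
import statistics

def timing_membership_score(query_fn, candidate_text, baseline_texts, repeats=20):
    """Timing side-channel sketch: compare the service's median response time on a
    candidate record against a baseline of unrelated texts. A consistent timing
    gap is treated as a (weak) membership signal."""
    def median_latency(text):
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            query_fn(text)  # black-box NER service call
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    baseline = statistics.median(median_latency(t) for t in baseline_texts)
    return median_latency(candidate_text) - baseline
```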

A Differentially Private Framework for Deep Learning with Convexified Loss Functions

no code implementations • 3 Apr 2022 • Zhigang Lu, Hassan Jameel Asghar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson

Under a black-box setting, and based on this global sensitivity, we propose a novel output perturbation framework that controls the overall noise injection by injecting DP noise into a randomly sampled neuron (chosen via the exponential mechanism) at the output layer of a baseline non-private neural network trained with a convexified loss function.
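A schematic of the two pieces mentioned above, choosing an output neuron with the exponential mechanism and then perturbing only that neuron, might look like the following. The utility scores and Laplace calibration here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def select_output_neuron(utilities, epsilon, sensitivity, rng=None):
    """Exponential-mechanism sketch: sample one output-layer neuron with
    probability proportional to exp(eps * utility / (2 * sensitivity))."""
    rng = np.random.default_rng() if rng is None else rng
    scores = epsilon * np.asarray(utilities, dtype=float) / (2.0 * sensitivity)
    scores -= scores.max()                      # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return rng.choice(len(utilities), p=probs)

def perturb_output(output_vector, neuron_idx, global_sensitivity, epsilon, rng=None):
    """Output perturbation sketch: add Laplace noise (scale = sensitivity / eps)
    to the selected neuron only."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.array(output_vector, dtype=float)
    noisy[neuron_idx] += rng.laplace(scale=global_sensitivity / epsilon)
    return noisy
```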

On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models

no code implementations • 12 Mar 2021 • Benjamin Zi Hao Zhao, Aviral Agrawal, Catisha Coburn, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson

In this paper, we take a closer look at another inference attack reported in the literature, called attribute inference, whereby an attacker tries to infer missing attributes of a partially known record used in the training dataset by accessing the machine learning model as an API.

Attribute • BIG-bench Machine Learning • +1
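A bare-bones version of such an attribute inference query loop, with a hypothetical predict_proba API standing in for the model, could look like this sketch:

```python
import numpy as np

def infer_attribute(predict_proba, partial_record, attr_name, candidate_values, true_label):
    """Attribute-inference sketch: complete the partial record with each candidate
    value of the missing attribute, query the model API, and return the value that
    gives the highest confidence on the record's known label."""
    scores = []
    for value in candidate_values:
        record = dict(partial_record, **{attr_name: value})
        scores.append(predict_proba(record)[true_label])
    return candidate_values[int(np.argmax(scores))]
```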

On the Resilience of Biometric Authentication Systems against Random Inputs

1 code implementation • 13 Jan 2020 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Mohamed Ali Kaafar

The average false positive rate (FPR) of the system, i.e., the rate at which an impostor is incorrectly accepted as the legitimate user, may be interpreted as a measure of the success probability of such an attack.

BIG-bench Machine Learning
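Under the interpretation given in the abstract, the success probability of submitting several random inputs follows directly from the FPR; a small illustrative calculation, assuming attempts are independent:

```python
def random_input_success_probability(fpr, attempts):
    """If each random input is accepted independently with probability FPR,
    the chance that at least one of `attempts` random inputs is accepted is
    1 - (1 - FPR) ** attempts."""
    return 1.0 - (1.0 - fpr) ** attempts

# Example: a system with 1% FPR and 10 random-input attempts.
print(random_input_success_probability(0.01, 10))   # ~0.096
```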

On Inferring Training Data Attributes in Machine Learning Models

no code implementations • 28 Aug 2019 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar

A number of recent works have demonstrated that API access to machine learning models leaks information about the dataset records used to train the models.

Attribute • BIG-bench Machine Learning
