no code implementations • 5 Aug 2024 • Muhammad Salman, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Muhammad Ikram, Sidharth Kaushik, Mohamed Ali Kaafar
Adversarial attacks have been demonstrated to pose significant challenges in domains like image classification, with results showing that an image adversarially perturbed to evade one classifier is very likely transferable to other classifiers.
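As a rough illustration of this transferability effect (a generic FGSM-style sketch on synthetic data, not the paper's experimental setup), the snippet below perturbs inputs against one linear classifier and measures how often the perturbed inputs also evade a separately trained classifier:

```python
# Minimal transferability sketch (assumed setup, not the paper's experiments):
# craft an FGSM-style perturbation against classifier A and test it on classifier B.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf_a = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])
clf_b = LinearSVC(max_iter=5000).fit(X[1000:], y[1000:])

# Gradient of the logistic loss w.r.t. the input for a linear model: (p - y) * w
p = clf_a.predict_proba(X)[:, 1]
grad = (p - y)[:, None] * clf_a.coef_          # shape (n_samples, n_features)
X_adv = X + 0.5 * np.sign(grad)                # epsilon = 0.5 is an arbitrary choice

evade_a = (clf_a.predict(X_adv) != y).mean()
evade_b = (clf_b.predict(X_adv) != y).mean()   # transfer rate to the unseen classifier
print(f"misclassified by A: {evade_a:.2%}, transferred to B: {evade_b:.2%}")
```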
1 code implementation • 26 Jun 2024 • Conor Atkins, Ian Wood, Mohamed Ali Kaafar, Hassan Asghar, Nardine Basta, Michal Kepkowski
We present ConvoCache, a conversational caching system that solves the problem of slow and expensive generative AI models in spoken chatbots.
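A minimal sketch of the caching idea follows; the hashed bag-of-words embedding, the 0.8 similarity threshold, and the stub generator are placeholder assumptions, not ConvoCache's actual components:

```python
# Toy semantic-cache sketch (placeholder embedding and threshold, not ConvoCache itself):
# reuse a previously generated reply when a new prompt is similar enough to a cached one.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Cheap stand-in embedding: hashed bag-of-words, L2-normalised."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class ConversationCache:
    def __init__(self, generator, threshold: float = 0.8):
        self.generator = generator          # expensive model, called only on cache misses
        self.threshold = threshold
        self.keys, self.values = [], []

    def respond(self, prompt: str) -> str:
        q = embed(prompt)
        if self.keys:
            sims = np.array([k @ q for k in self.keys])   # cosine similarity (unit vectors)
            best = int(sims.argmax())
            if sims[best] >= self.threshold:
                return self.values[best]                  # cache hit: skip generation
        reply = self.generator(prompt)                    # cache miss: generate and store
        self.keys.append(q)
        self.values.append(reply)
        return reply

cache = ConversationCache(generator=lambda p: f"generated reply to: {p}")
print(cache.respond("how do I reset my password?"))
print(cache.respond("how can I reset my password?"))      # likely served from the cache
```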
no code implementations • 31 May 2023 • Houssem Jmal, Firas Ben Hmida, Nardine Basta, Muhammad Ikram, Mohamed Ali Kaafar, Andy Walker
Attack paths are the potential chains of malicious activities an attacker performs to compromise network assets and acquire privileges by exploiting network vulnerabilities.
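As an illustration of the concept (a toy model, not the paper's system), attack paths can be enumerated as simple paths through a directed graph whose edges are exploitable steps:

```python
# Illustrative attack-path enumeration over a toy vulnerability graph (assumed model,
# not the paper's system): nodes are hosts/privileges, edges are exploitable steps.
import networkx as nx

g = nx.DiGraph()
g.add_edge("internet", "web-server", exploit="remote code execution")
g.add_edge("web-server", "app-server", exploit="weak credentials")
g.add_edge("web-server", "db-server", exploit="SQL injection")
g.add_edge("app-server", "db-server", exploit="lateral movement")
g.add_edge("db-server", "domain-admin", exploit="privilege escalation")

# Each simple path from the entry point to the target asset is one candidate attack path.
for path in nx.all_simple_paths(g, source="internet", target="domain-admin"):
    steps = [g.edges[u, v]["exploit"] for u, v in zip(path, path[1:])]
    print(" -> ".join(path), "|", "; ".join(steps))
```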
1 code implementation • 6 Apr 2023 • Conor Atkins, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Ian Wood, Mohamed Ali Kaafar
We generate 150 examples of misinformation, of which 114 (76%) were remembered by BlenderBot 2 when combined with a personal statement.
no code implementations • 9 Jan 2023 • Nan Wu, Dinusha Vatsalan, Mohamed Ali Kaafar, Sanath Kumar Ramesh
Several applications require counting the number of distinct items in the data, which is known as the cardinality counting problem.
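For context, a standard non-private estimator for this problem is the k-minimum-values (KMV) sketch shown below; it illustrates the counting task only and is unrelated to the paper's privacy-preserving approach:

```python
# K-minimum-values (KMV) sketch: a standard probabilistic distinct-count estimator,
# shown only to illustrate the cardinality counting task (not the paper's private method).
import hashlib

def kmv_estimate(items, k=256):
    mins = set()   # the k smallest distinct hash values seen so far
    for it in items:
        h = int.from_bytes(hashlib.sha1(str(it).encode()).digest()[:8], "big")
        u = (h + 1) / 2**64                      # pseudo-uniform value in (0, 1]
        if u in mins:
            continue
        if len(mins) < k:
            mins.add(u)
        elif u < max(mins):
            mins.remove(max(mins))
            mins.add(u)
    if len(mins) < k:
        return len(mins)                         # fewer than k distinct items: exact count
    return (k - 1) / max(mins)                   # estimate: (k - 1) / k-th smallest value

stream = [f"user-{i % 30000}" for i in range(100000)]   # 30,000 distinct items
print(round(kmv_estimate(stream)))                       # roughly 30,000
```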
no code implementations • 3 Apr 2022 • Zhigang Lu, Hassan Jameel Asghar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson
Under a black-box setting, and based on this global sensitivity, we propose a novel output perturbation framework that controls the overall noise injection by adding DP noise to a single neuron, randomly sampled via the exponential mechanism, at the output layer of a baseline non-private neural network trained with a convexified loss function.
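The following sketch shows the two generic ingredients named above (exponential-mechanism selection and output-layer noise) in isolation; the utility scores, sensitivities, and budget split are placeholder assumptions, and this is not a reproduction of the proposed framework:

```python
# Generic sketch of exponential-mechanism selection plus output-layer noise -- NOT the
# paper's actual framework. Utility scores, sensitivities, and the epsilon split are
# placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

def exponential_mechanism(utilities, epsilon, sensitivity):
    """Sample an index with probability proportional to exp(eps * u / (2 * sensitivity))."""
    scores = epsilon * np.asarray(utilities) / (2.0 * sensitivity)
    scores -= scores.max()                        # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return rng.choice(len(utilities), p=probs)

def perturb_output(logits, utilities, epsilon, utility_sensitivity, output_sensitivity):
    # Spend half the budget choosing which output neuron to perturb, half on the noise.
    idx = exponential_mechanism(utilities, epsilon / 2, utility_sensitivity)
    noisy = np.array(logits, dtype=float)
    noisy[idx] += rng.laplace(scale=output_sensitivity / (epsilon / 2))
    return noisy

logits = [2.1, 0.3, -1.2]                  # example output-layer activations
utilities = [0.9, 0.5, 0.1]                # placeholder per-neuron utility scores
print(perturb_output(logits, utilities, epsilon=1.0,
                     utility_sensitivity=1.0, output_sensitivity=1.0))
```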
no code implementations • 12 Mar 2021 • Benjamin Zi Hao Zhao, Aviral Agrawal, Catisha Coburn, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson
In this paper, we take a closer look at another inference attack reported in the literature, called attribute inference, in which an attacker tries to infer missing attributes of a partially known record used in the training dataset by accessing the machine learning model as an API.
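The attack idea can be sketched generically as follows: enumerate candidate values of the unknown attribute, query the model through its API, and keep the value under which the model is most confident in the record's known label. The API, feature names, and candidate set below are hypothetical:

```python
# Generic attribute-inference sketch (an illustration of the attack idea, not the
# paper's specific procedure).
def infer_missing_attribute(model_api, partial_record, attribute, candidates, true_label):
    best_value, best_confidence = None, -1.0
    for value in candidates:
        record = dict(partial_record, **{attribute: value})
        probs = model_api(record)                 # class-probability vector from the API
        confidence = probs[true_label]
        if confidence > best_confidence:
            best_value, best_confidence = value, confidence
    return best_value

# Hypothetical usage: the model API, features, and candidates are all assumptions.
guess = infer_missing_attribute(
    model_api=lambda r: {0: 0.2, 1: 0.8} if r.get("smoker") == "yes" else {0: 0.7, 1: 0.3},
    partial_record={"age": 52, "bmi": 31.0},
    attribute="smoker",
    candidates=["yes", "no"],
    true_label=1,
)
print(guess)   # "yes" -- the value under which the model is most confident in the known label
```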
1 code implementation • 20 Aug 2020 • Benjamin Zi Hao Zhao, Mohamed Ali Kaafar, Nicolas Kourtellis
In this work, we empirically evaluate various implementations of differential privacy (DP), and measure their ability to fend off real-world privacy attacks, in addition to measuring their core goal of providing accurate classifications.
Cryptography and Security
no code implementations • 18 Mar 2020 • Farhad Farokhi, Nan Wu, David Smith, Mohamed Ali Kaafar
The experiments illustrate that collaboration among more than 10 data owners, each with at least 10,000 records and a privacy budget greater than or equal to 1, results in a machine-learning model superior to one trained in isolation on only one of the datasets, illustrating the value of collaboration and the cost of privacy.
no code implementations • 29 Jan 2020 • Farhad Farokhi, Mohamed Ali Kaafar
We use conditional mutual information leakage to measure the amount of information leakage from the trained machine learning model about the presence of an individual in the training dataset.
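In symbols (with notation assumed here for illustration: theta the trained model, z_i the record of individual i, and D_{-i} the remaining training records), this leakage measure can be written as:

```latex
% Assumed notation for illustration: \theta is the trained model, z_i the record of
% individual i, and D_{-i} the remaining training records.
\[
\mathcal{L}_i \;=\; I\!\left(\theta;\, z_i \,\middle|\, D_{-i}\right)
\;=\; \mathbb{E}\!\left[\, \log \frac{p\!\left(\theta \mid z_i, D_{-i}\right)}
                                      {p\!\left(\theta \mid D_{-i}\right)} \,\right].
\]
```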
1 code implementation • 13 Jan 2020 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Mohamed Ali Kaafar
The average false positive rate (FPR) of the system, i.e., the rate at which an impostor is incorrectly accepted as the legitimate user, may be interpreted as a measure of the success probability of such an attack.
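Concretely, if impostor attempts are modelled as independent trials that each succeed with probability equal to the FPR (a simplifying assumption), then:

```latex
% With FPR denoted \alpha and n independent impostor attempts (a simplifying assumption):
\[
\Pr[\text{at least one acceptance}] \;=\; 1 - (1 - \alpha)^{n},
\qquad
\mathbb{E}[\text{attempts until acceptance}] \;=\; \frac{1}{\alpha}.
\]
```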
no code implementations • 28 Aug 2019 • Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar
A number of recent works have demonstrated that API access to machine learning models leaks information about the dataset records used to train the models.
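A classic example of such leakage is confidence-thresholding membership inference, sketched below as a generic illustration (not necessarily one of the attacks studied in this paper):

```python
# Classic confidence-thresholding membership test -- a generic illustration of how
# API access can leak training-set membership, not necessarily the attack studied here.
import numpy as np

def membership_guess(model_api, record, threshold=0.9):
    """Guess 'member' if the API's top predicted probability exceeds the threshold,
    exploiting the tendency of models to be over-confident on training records."""
    probs = np.asarray(model_api(record))
    return "member" if probs.max() > threshold else "non-member"

# Hypothetical API returning class probabilities for a record (placeholder values).
print(membership_guess(lambda r: [0.02, 0.97, 0.01], record={"x": 1}))   # member
print(membership_guess(lambda r: [0.40, 0.35, 0.25], record={"x": 2}))   # non-member
```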
no code implementations • 24 Jun 2019 • Nan Wu, Farhad Farokhi, David Smith, Mohamed Ali Kaafar
In this paper, we apply machine learning to distributed private data owned by multiple data owners, i.e., entities with access to non-overlapping training datasets.
no code implementations • 22 May 2019 • Muhammad Ikram, Pierrick Beaume, Mohamed Ali Kaafar
We examine the graph features of mobile app code by building weighted directed graphs of API calls, and verify that malicious apps often share structural similarities that can be used to differentiate them from benign apps, even under a heavily "polluted" training set where a large majority of the apps are obfuscated (a sketch of such a pipeline follows this entry).
Cryptography and Security
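A hedged sketch of this kind of pipeline, with assumed (not the paper's) structural features, is:

```python
# Sketch of the pipeline described above, with assumed feature choices: build a weighted
# directed graph over API calls, summarise its structure, and feed the summary to a classifier.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def graph_features(api_call_edges):
    """api_call_edges: iterable of (caller_api, callee_api, count) tuples for one app."""
    g = nx.DiGraph()
    for u, v, w in api_call_edges:
        g.add_edge(u, v, weight=w)
    degrees = [d for _, d in g.degree()]
    return np.array([
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        float(np.mean(degrees)) if degrees else 0.0,
        float(np.max(degrees)) if degrees else 0.0,
    ])

# Toy training set: one benign-looking app and one with a suspicious exfiltration chain.
apps = [
    [("onCreate", "setText", 3), ("onCreate", "getString", 1)],
    [("onCreate", "getDeviceId", 5), ("getDeviceId", "openConnection", 4),
     ("openConnection", "sendTextMessage", 4)],
]
labels = [0, 1]   # 0 = benign, 1 = malicious (illustrative labels only)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(np.stack([graph_features(a) for a in apps]), labels)
```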
no code implementations • 6 Jun 2018 • Parameswaran Kamalaruban, Victor Perrier, Hassan Jameel Asghar, Mohamed Ali Kaafar
However, differential privacy provides the same level of protection for all elements (individuals and attributes) in the data.
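This uniformity is visible in the standard definition of epsilon-differential privacy, where a single budget bounds the distinguishability of every pair of neighbouring datasets, regardless of which individual or attribute they differ in:

```latex
% Standard \varepsilon-differential privacy: one budget for all individuals and attributes.
\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[\mathcal{M}(D') \in S]
\quad \text{for all measurable } S \text{ and all neighbouring } D, D'.
\]
```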
no code implementations • 5 Jul 2017 • Amit Tiroshi, Tsvi Kuflik, Shlomo Berkovsky, Mohamed Ali Kaafar
The proposed approach is domain-independent (demonstrated on data from movies, music, and business recommender systems), and is evaluated using several state-of-the-art machine learning methods, on different recommendation tasks, and using different evaluation metrics.
no code implementations • 7 Sep 2014 • Emiliano De Cristofaro, Arik Friedman, Guillaume Jourjon, Mohamed Ali Kaafar, M. Zubair Shafiq
Facebook pages offer an easy way to reach a very large audience, as they can readily be promoted using Facebook's advertising platform.
Social and Information Networks • Cryptography and Security • Physics and Society