no code implementations • 12 Apr 2024 • Farhad Farokhi
The optimal universal encoding strategy, i.e., the encoding strategy that maximizes the maximal quantum leakage, is proved to be attained by pure states.
no code implementations • 20 Sep 2023 • Tian Hui, Farhad Farokhi, Olga Ohrimenko
We validate that having access to two snapshots of the model can result in higher information leakage than having access to only the updated model.
no code implementations • 18 Sep 2023 • Yuen-Man Pun, Farhad Farokhi, Iman Shames
In this work, we consider a sequence of stochastic optimization problems following a time-varying distribution via the lens of online optimization.
no code implementations • 1 Nov 2021 • Farhad Farokhi, Alex S. Leong, Mohammad Zamani, Iman Shames
A learning-based safety filter is developed for discrete-time linear time-invariant systems with unknown models subject to Gaussian noise with unknown covariance.
no code implementations • 12 Oct 2021 • Junsoo Kim, Farhad Farokhi, Iman Shames, Hyungbo Shim
In this note, we demonstrate that it is possible to run a dynamic controller over encrypted data for an infinite time horizon if the output of the controller can be represented as a function of a fixed number of previous inputs and outputs.
no code implementations • 2 Mar 2021 • Farhad Farokhi, Alex Leong, Iman Shames, Mohammad Zamani
We show that with an arbitrarily large probability we can guarantee that the state will remain in the safe set, while learning and control are carried out simultaneously, provided that a feasible solution exists for the optimization problem.
no code implementations • 18 Jan 2021 • Farhad Farokhi
Hence, the problem of finding the optimal pre-processing regimen for enforcing fairness can be cast as minimizing the total variation distance between the distributions of the data before and after pre-processing, subject to a constraint on the total variation distance between the distributions of the inputs given the protected attributes.
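As a minimal, self-contained illustration of the total variation distance this formulation relies on (the distributions below are invented for the example and are not from the paper):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions p and q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Toy example: distribution of a feature before and after a pre-processing map.
before = np.array([0.50, 0.30, 0.20])
after = np.array([0.40, 0.35, 0.25])
print(total_variation(before, after))  # -> 0.1
```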
no code implementations • 30 Nov 2020 • Farhad Farokhi
We prove that, for small privacy budgets, compression can improve performance of privacy-preserving machine learning models.
no code implementations • 24 Nov 2020 • Bo Liu, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi, Zihuai Lin
The newly emerged machine learning (e.g., deep learning) methods have become a strong driving force to revolutionize a wide range of industries, such as smart healthcare, financial technology, and surveillance systems.
no code implementations • 28 Aug 2020 • Farhad Farokhi
However, locally differentially-private data can distort the probability density of the underlying data because of the additive noise used to ensure privacy.
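A rough numpy sketch of the distortion the abstract refers to (the data, privacy budget, and sensitivity below are assumptions made for illustration, not the paper's setup): adding Laplace noise to bounded records for local differential privacy visibly spreads out their empirical density.

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 1.0                               # assumed local privacy budget
data = rng.uniform(0.0, 1.0, size=10_000)   # synthetic records in [0, 1]

# Laplace mechanism for a record in [0, 1]: sensitivity 1, scale 1/epsilon.
noisy = data + rng.laplace(scale=1.0 / epsilon, size=data.shape)

# The reported (noisy) values are far more spread out than the true records,
# i.e. the additive noise has distorted the underlying density.
print(data.std(), noisy.std())
```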
no code implementations • 24 Jun 2020 • Farhad Farokhi
For general distributions, the distributionally-robust optimization problem can be relaxed into a regularized machine learning problem with the Lipschitz constant of the machine learning model as a regularizer.
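A rough sketch of the kind of regularized surrogate the abstract describes, specialized to a linear model whose Lipschitz constant (with respect to its input, in the Euclidean norm) is simply the norm of its weight vector; the loss, data, and penalty weight here are assumptions for illustration:

```python
import numpy as np

def lipschitz_regularized_regression(X, y, radius=0.1, lr=0.01, epochs=500):
    """Squared loss plus a penalty proportional to the model's Lipschitz
    constant, which for f(x) = w @ x equals ||w||_2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / n                      # gradient of (1/2n)||Xw - y||^2
        norm = np.linalg.norm(w)
        grad += radius * (w / norm if norm > 0 else 0.0)  # subgradient of radius * ||w||_2
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
print(lipschitz_regularized_regression(X, y))
```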
no code implementations • 2 Jun 2020 • Iman Shames, Farhad Farokhi
Distributionally-robust optimization is often studied for a fixed set of distributions rather than time-varying distributions that can drift significantly over time (which is, for instance, the case in finance and sociology due to the underlying expansion of the economy and the evolution of demographics).
no code implementations • 24 Mar 2020 • Amir Saberi, Farhad Farokhi, Girish N. Nair
We investigate state estimation of linear systems over channels with a finite state that is not known to the transmitter or the receiver.
no code implementations • 18 Mar 2020 • Farhad Farokhi, Nan Wu, David Smith, Mohamed Ali Kaafar
The experiments illustrate that collaboration among more than 10 data owners with at least 10,000 records and privacy budgets greater than or equal to 1 results in a machine-learning model superior to a model trained in isolation on only one of the datasets, illustrating the value of collaboration and the cost of privacy.
1 code implementation • 17 Feb 2020 • Shakila Mahjabin Tonni, Dinusha Vatsalan, Farhad Farokhi, Dali Kaafar, Zhigang Lu, Gioacchino Tangari
Our results reveal the relationship between MIA accuracy and properties of the dataset and training model in use.
no code implementations • 9 Feb 2020 • Ghassen Zafzouf, Girish N. Nair, Farhad Farokhi
This paper addresses the problem of distributed state estimation via multiple access channels (MACs).
no code implementations • 29 Jan 2020 • Farhad Farokhi
We provide performance guarantees for the trained model on the original data (not including the poison records) by training the model for the worst-case distribution on a neighbourhood around the empirical distribution (extracted from the training dataset corrupted by a poisoning attack) defined using the Wasserstein distance.
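Written as a worked formula (with the empirical distribution of the possibly poisoned training set, the Wasserstein distance, the neighbourhood radius, and the training loss denoted by symbols chosen here for illustration), the construction is the distributionally-robust program

\[
  \min_{\theta} \; \sup_{Q \,:\, W(Q,\, \widehat{P}_n) \le \rho} \; \mathbb{E}_{z \sim Q}\big[\ell(\theta; z)\big],
\]

whose inner supremum ranges over all distributions within Wasserstein distance \(\rho\) of the corrupted empirical distribution \(\widehat{P}_n\), so the worst case also bounds the performance on the clean data distribution whenever that distribution lies inside the neighbourhood.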
no code implementations • 29 Jan 2020 • Farhad Farokhi, Mohamed Ali Kaafar
We use conditional mutual information leakage to measure the amount of information leakage from the trained machine learning model about the presence of an individual in the training dataset.
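A toy, non-conditional stand-in for this kind of measurement (the membership bits, discretized model outputs, and plug-in estimator below are all assumptions made for the sketch, not the paper's conditional measure):

```python
import numpy as np

def mutual_information_bits(x, y):
    """Plug-in estimate of I(X; Y) in bits for paired discrete samples x, y."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Toy usage: membership bit vs. a binned confidence the model assigns to a record.
rng = np.random.default_rng(0)
member = rng.integers(0, 2, size=5_000)
confidence_bin = np.where(member == 1,
                          rng.integers(2, 4, size=5_000),   # members: higher bins
                          rng.integers(0, 4, size=5_000))   # non-members: uniform
print(mutual_information_bits(member, confidence_bin))      # > 0 bits of leakage
```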
no code implementations • 29 Dec 2019 • Farhad Farokhi
The optimal noise distribution is determined by maximizing a weighted sum of the measures of privacy and utility.
no code implementations • 29 Oct 2019 • Farhad Farokhi
We prove several guarantees for noiselessly-private mechanisms.
no code implementations • 24 Jun 2019 • Farhad Farokhi
This results in computing a linear support vector machine classifier that is robust against adversarial input manipulations.
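A short sketch of one standard way such a robust linear SVM arises (the robust hinge reformulation for l_inf-bounded perturbations of size eps, along with all data and hyper-parameters, are assumptions for illustration rather than the paper's exact formulation):

```python
import numpy as np

def robust_linear_svm(X, y, eps=0.1, lam=0.01, lr=0.01, epochs=1000):
    """Subgradient descent on the robust hinge loss
    max(0, 1 - y * (w @ x + b) + eps * ||w||_1) + (lam / 2) * ||w||^2,
    which hardens the margin against l_inf perturbations of size eps."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b) - eps * np.abs(w).sum()
        active = margins < 1.0                 # points with a positive robust hinge
        gw = (lam * w
              - (y[active][:, None] * X[active]).sum(axis=0) / n
              + active.mean() * eps * np.sign(w))
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy usage on two well-separated clusters with labels in {-1, +1}.
rng = np.random.default_rng(0)
labels = rng.choice([-1.0, 1.0], size=200)
X = rng.normal(size=(200, 2)) + 2.0 * labels[:, None]
w, b = robust_linear_svm(X, labels)
print(np.mean(np.sign(X @ w + b) == labels))   # training accuracy
```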
no code implementations • 24 Jun 2019 • Nan Wu, Farhad Farokhi, David Smith, Mohamed Ali Kaafar
In this paper, we apply machine learning to distributed private data owned by multiple data owners, entities with access to non-overlapping training datasets.