no code implementations • 9 Jan 2023 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e.g., a company) coordinating the distributed training.
no code implementations • 23 Nov 2022 • Adam Dziedzic, Christopher A Choquette-Choo, Natalie Dullerud, Vinith Menon Suriyakumar, Ali Shahin Shamsabadi, Muhammad Ahmad Kaleem, Somesh Jha, Nicolas Papernot, Xiao Wang
We use our mechanisms to enable privacy-preserving multi-label learning in the central setting by extending the canonical single-label technique: PATE.
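A minimal sketch of one way the single-label PATE aggregation extends to multiple labels, treating each label as an independent noisy binary vote over the teacher ensemble; the noise scale `sigma` and the majority threshold below are illustrative assumptions, not the paper's calibrated mechanism.

```python
import numpy as np

def noisy_multilabel_aggregate(teacher_votes, sigma=4.0, rng=None):
    """Per-label noisy majority vote over an ensemble of teachers (sketch).

    teacher_votes: 0/1 array of shape (n_teachers, n_labels). Gaussian
    noise is added to each label's positive-vote count before thresholding;
    sigma and the majority threshold are illustrative.
    """
    rng = rng or np.random.default_rng()
    votes = np.asarray(teacher_votes)
    counts = votes.sum(axis=0).astype(float)            # per-label positive votes
    noisy = counts + rng.normal(0.0, sigma, counts.shape)
    return (noisy > votes.shape[0] / 2).astype(int)     # noisy majority per label

# Example: 50 teachers voting on 5 binary labels
rng = np.random.default_rng(0)
print(noisy_multilabel_aggregate(rng.integers(0, 2, size=(50, 5)), rng=rng))
```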
no code implementations • 1 Jun 2022 • Chau Yi Li, Ricardo Sánchez-Matilla, Ali Shahin Shamsabadi, Riccardo Mazzon, Andrea Cavallaro
We refer to this property as the reversibility of an adversarial attack, and quantify reversibility as the accuracy in retrieving the original class or the true class of an adversarial example.
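Since reversibility is defined as an accuracy, it can be measured in a few lines once a recovery procedure is available; the names below (`predict`, `recovered_examples`) are hypothetical placeholders, not the paper's API.

```python
import numpy as np

def reversibility(predict, recovered_examples, original_labels):
    """Fraction of adversarial examples whose recovered version is
    classified back to the original (or true) class."""
    preds = np.array([predict(x) for x in recovered_examples])
    return float(np.mean(preds == np.asarray(original_labels)))
```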
1 code implementation • 2 Mar 2022 • Sina Sajadmanesh, Ali Shahin Shamsabadi, Aurélien Bellet, Daniel Gatica-Perez
In this paper, we study the problem of learning Graph Neural Networks (GNNs) with Differential Privacy (DP).
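One common recipe for DP on graphs, sketched below under simplified assumptions, is to bound each node's contribution and perturb the neighborhood aggregation with Gaussian noise; the clipping bound and noise scale are illustrative, and real privacy accounting (composition across hops and epochs) is omitted.

```python
import numpy as np

def dp_neighborhood_aggregate(features, adjacency, sigma=1.0, rng=None):
    """Differentially private neighborhood aggregation (simplified sketch).

    Rows are clipped to at most unit L2 norm so each node's contribution
    to a neighbor's sum is bounded, then Gaussian noise is added to the
    aggregate; sigma is an illustrative noise scale.
    """
    rng = rng or np.random.default_rng()
    X = np.asarray(features, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X * np.minimum(1.0, 1.0 / np.maximum(norms, 1e-12))  # L2 clipping
    agg = np.asarray(adjacency, dtype=float) @ X             # sum over neighbors
    return agg + rng.normal(0.0, sigma, agg.shape)
```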
no code implementations • 23 Feb 2022 • Ali Shahin Shamsabadi, Brij Mohan Lal Srivastava, Aurélien Bellet, Nathalie Vauquier, Emmanuel Vincent, Mohamed Maouche, Marc Tommasi, Nicolas Papernot
We remove speaker information from these attributes by introducing differentially private feature extractors based on an autoencoder and an automatic speech recognizer, respectively, trained using noise layers.
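A minimal sketch of a noise layer of the kind described here: the feature extractor's representation is norm-bounded to fix its sensitivity, then perturbed with Laplace noise. The values of `clip` and `epsilon` are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class LaplaceNoiseLayer(nn.Module):
    """Bound the representation's L1 norm, then add Laplace noise (sketch).

    Clipping fixes the sensitivity of the layer output, so the Laplace
    mechanism applies with scale b = clip / epsilon.
    """
    def __init__(self, clip=1.0, epsilon=1.0):
        super().__init__()
        self.clip = clip
        self.scale = clip / epsilon

    def forward(self, z):
        norm = z.abs().sum(dim=-1, keepdim=True).clamp(min=1e-12)
        z = z * (self.clip / norm).clamp(max=1.0)   # L1-norm clipping
        noise = torch.distributions.Laplace(0.0, self.scale).sample(z.shape)
        return z + noise.to(z.device)
```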
no code implementations • 6 Feb 2022 • Shimaa Ahmed, Yash Wani, Ali Shahin Shamsabadi, Mohammad Yaghini, Ilia Shumailov, Nicolas Papernot, Kassem Fawaz
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning.
1 code implementation • 6 Dec 2021 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.
no code implementations • ICLR 2022 • Hengrui Jia, Hongyu Chen, Jonas Guan, Ali Shahin Shamsabadi, Nicolas Papernot
In this paper, we instead propose to compute the distance between black-box models by comparing their Local Interpretable Model-Agnostic Explanations (LIME).
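A rough sketch of the idea, assuming the full LIME pipeline is replaced by a plain local linear surrogate (no kernel weighting or feature selection): explain both black-box models around the same reference points and compare the resulting coefficient vectors.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(predict_fn, x, n=500, scale=0.1, rng=None):
    """Fit a LIME-style local linear surrogate around x for a black-box
    predict_fn returning class scores; returns the coefficient vector."""
    rng = rng or np.random.default_rng()
    samples = x + rng.normal(0.0, scale, size=(n, x.shape[0]))
    targets = np.array([predict_fn(s) for s in samples])
    return Ridge(alpha=1.0).fit(samples, targets).coef_.ravel()

def lime_distance(model_a, model_b, reference_points, **kw):
    """Average cosine distance between the two models' local explanations,
    one sketch of the model-comparison idea described above."""
    dists = []
    for x in reference_points:
        wa = local_explanation(model_a, x, **kw)
        wb = local_explanation(model_b, x, **kw)
        cos = wa @ wb / (np.linalg.norm(wa) * np.linalg.norm(wb) + 1e-12)
        dists.append(1.0 - cos)
    return float(np.mean(dists))
```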
no code implementations • 29 Sep 2021 • Ali Shahin Shamsabadi, Nicolas Papernot
In this paper, we are the first to observe that some of this performance can be recovered when training with a loss tailored to DP-SGD; we challenge cross-entropy as the de facto loss for deep learning with DP.
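The observation concerns the interaction between the loss and DP-SGD's per-example clipping. The sketch below shows a standard DP-SGD step with a pluggable loss; `bounded_loss` at the end is a generic stand-in for illustration, not the loss proposed in the paper.

```python
import torch
import torch.nn.functional as F

def dp_sgd_step(model, xb, yb, loss_fn, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step: per-example gradient clipping + Gaussian noise.

    The loss is pluggable; a loss whose per-example gradients behave
    better under clipping than cross-entropy's can recover accuracy.
    """
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                      # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        factor = (clip / (norm + 1e-6)).clamp(max=1.0)  # clip L2 norm to <= clip
        for g, p in zip(grads, model.parameters()):
            g += p.grad * factor
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            g += noise_mult * clip * torch.randn_like(g)  # Gaussian mechanism
            p -= lr * g / len(xb)

def bounded_loss(logits, y):
    """Illustrative bounded stand-in: squared error on softmax probabilities."""
    probs = F.softmax(logits, dim=-1)
    onehot = F.one_hot(y, probs.shape[-1]).float()
    return (probs - onehot).pow(2).sum(dim=-1).mean()
```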
2 code implementations • 17 Nov 2020 • Ali Shahin Shamsabadi, Francisco Sepúlveda Teixeira, Alberto Abad, Bhiksha Raj, Andrea Cavallaro, Isabel Trancoso
Speaker identification models are vulnerable to carefully designed adversarial perturbations of their input signals that induce misclassification.
2 code implementations • 13 Aug 2020 • Ali Shahin Shamsabadi, Changjae Oh, Andrea Cavallaro
The proposed framework combines a structure loss and a semantic adversarial loss in a multi-task objective function to train a fully convolutional neural network.
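A minimal sketch of such a multi-task objective, with illustrative loss choices and weights: one term keeps the output close to the input (structure), the other pushes a classifier away from the true class (semantic adversarial).

```python
import torch.nn.functional as F

def multitask_loss(x_adv, x, classifier, y_true, alpha=1.0, beta=1.0):
    """Structure loss + semantic adversarial loss (illustrative weights).

    The first term preserves image detail; the second misleads the
    classifier by moving it away from the true class.
    """
    structure = F.l1_loss(x_adv, x)
    semantic = -F.cross_entropy(classifier(x_adv), y_true)
    return alpha * structure + beta * semantic
```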
1 code implementation • 19 Jul 2020 • Ricardo Sanchez-Matilla, Chau Yi Li, Ali Shahin Shamsabadi, Riccardo Mazzon, Andrea Cavallaro
To address these limitations, we present an adversarial attack that is specifically designed to protect visual content against unseen classifiers and known defenses.
1 code implementation • 12 Apr 2020 • Ali Shahin Shamsabadi, Adria Gascon, Hamed Haddadi, Andrea Cavallaro
To address this problem, we propose PrivEdge, a technique for privacy-preserving MLaaS that safeguards the privacy of users who provide their data for training, as well as users who use the prediction service.
2 code implementations • 12 Apr 2020 • Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi
We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).
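A conceptual sketch of the partitioning step, in plain Python rather than an actual TrustZone TEE: the model is split so the final, most membership-sensitive layers would execute inside the trusted world. Layer sizes and the split point are illustrative.

```python
import torch.nn as nn

def partition(layers, split_at):
    """Split a model into a normal-world part and a (simulated) TEE part.

    In the real system the second part executes inside the TEE; here both
    are ordinary modules, purely for illustration.
    """
    return nn.Sequential(*layers[:split_at]), nn.Sequential(*layers[split_at:])

layers = [nn.Linear(784, 256), nn.ReLU(),
          nn.Linear(256, 64), nn.ReLU(),
          nn.Linear(64, 10)]
normal_world, tee_part = partition(layers, split_at=4)
```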
2 code implementations • CVPR 2020 • Ali Shahin Shamsabadi, Ricardo Sanchez-Matilla, Andrea Cavallaro
Instead, adversarial attacks that generate unrestricted perturbations are more robust to defenses, are generally more successful in black-box settings and are more transferable to unseen classifiers.
2 code implementations • 27 Oct 2019 • Ali Shahin Shamsabadi, Changjae Oh, Andrea Cavallaro
This loss function accounts for both image detail enhancement and class misleading objectives.
no code implementations • 13 Jul 2019 • Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Andrea Cavallaro, Hamed Haddadi
Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, leading to potential disclosures of (sensitive) information from training data captured inside these models.
no code implementations • 8 Jul 2019 • Nitin Agrawal, Ali Shahin Shamsabadi, Matt J. Kusner, Adrià Gascón
In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts.
no code implementations • 10 Feb 2018 • Ali Shahin Shamsabadi, Hamed Haddadi, Andrea Cavallaro
A major advantage of the proposed filter over existing distributed learning approaches is that users cannot access, even indirectly, the parameters of other users.
1 code implementation • 9 Feb 2018 • Seyed Ali Osia, Ali Taheri, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, Hamid R. Rabiee
We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information theoretic constraints.
1 code implementation • 4 Oct 2017 • Seyed Ali Osia, Ali Shahin Shamsabadi, Ali Taheri, Kleomenis Katevas, Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi
Our evaluations show that by using certain kinds of fine-tuning and embedding techniques, and at a small processing cost, we can greatly reduce the level of information available to unintended tasks applied to the data features on the cloud, and hence achieve the desired trade-off between privacy and performance.
1 code implementation • 8 Mar 2017 • Seyed Ali Osia, Ali Shahin Shamsabadi, Sina Sajadmanesh, Ali Taheri, Kleomenis Katevas, Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi
To this end, instead of performing the whole operation on the cloud, we let an IoT device run the initial layers of the neural network, and then send the output to the cloud to feed the remaining layers and produce the final result.
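A minimal sketch of this split-inference layout with an illustrative toy CNN: only the intermediate activation leaves the device.

```python
import torch
import torch.nn as nn

# Toy split: the device runs the initial layers; only the intermediate
# activation is sent to the cloud, which runs the remaining layers.
device_part = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.MaxPool2d(2))
cloud_part = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, 10))

x = torch.randn(1, 3, 32, 32)        # image captured on the IoT device
intermediate = device_part(x)        # computed locally
logits = cloud_part(intermediate)    # computed on the cloud
```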