Search Results for author: Ali Shahin Shamsabadi

Found 23 papers, 12 papers with code

Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey

no code implementations27 Sep 2023 Victoria Smith, Ali Shahin Shamsabadi, Carolyn Ashurst, Adrian Weller

To help researchers and policymakers understand the state of knowledge around privacy attacks and mitigations, including where more work is needed, we present the first technical survey on LM privacy.

Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation

no code implementations9 Jan 2023 Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e.g., a company) coordinating the distributed training.

Federated Learning
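The data-minimization setup described above (clients share only model updates, and a server aggregates them) can be sketched in a few lines. This is a generic federated-averaging illustration, not the paper's protocol; the linear model, names, and shapes are all hypothetical:

```python
import numpy as np

def client_update(weights, X, y, lr=0.1):
    # One local SGD step of linear regression on the device's private
    # data; only the resulting weights (not X or y) leave the device.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg(client_weights):
    # Server-side aggregation: averages the updates it receives and
    # never sees any raw data.
    return np.mean(client_weights, axis=0)

# Two simulated clients, each holding private (X, y) pairs.
rng = np.random.default_rng(0)
w = np.zeros(3)
clients = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(2)]
for _ in range(20):
    updates = [client_update(w, X, y) for X, y in clients]
    w = fedavg(updates)
```

The point of the paper above is precisely that this picture is incomplete: the shared updates themselves can leak the raw data.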

On the reversibility of adversarial attacks

no code implementations1 Jun 2022 Chau Yi Li, Ricardo Sánchez-Matilla, Ali Shahin Shamsabadi, Riccardo Mazzon, Andrea Cavallaro

We refer to this property as the reversibility of an adversarial attack, and quantify reversibility as the accuracy in retrieving the original class or the true class of an adversarial example.

Adversarial Attack

Differentially Private Speaker Anonymization

no code implementations23 Feb 2022 Ali Shahin Shamsabadi, Brij Mohan Lal Srivastava, Aurélien Bellet, Nathalie Vauquier, Emmanuel Vincent, Mohamed Maouche, Marc Tommasi, Nicolas Papernot

We remove speaker information from these attributes by introducing differentially private feature extractors based on an autoencoder and an automatic speech recognizer, respectively, trained using noise layers.

Automatic Speech Recognition (ASR) +2
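The "noise layers" mentioned above can be illustrated with the standard Laplace mechanism applied to a clipped feature vector. This is a generic sketch of per-coordinate differentially private noising, not the paper's actual extractor; the feature name, clipping bound, and epsilon are illustrative:

```python
import numpy as np

def laplace_noise_layer(features, sensitivity, epsilon, rng):
    # Laplace mechanism: adding Laplace(sensitivity / epsilon) noise to
    # each coordinate of a bounded feature yields epsilon-DP per coordinate.
    scale = sensitivity / epsilon
    return features + rng.laplace(0.0, scale, size=features.shape)

rng = np.random.default_rng(42)
# Clipping to [-1, 1] bounds each coordinate's sensitivity by 2.
pitch_features = np.clip(rng.normal(size=16), -1.0, 1.0)
private_features = laplace_noise_layer(pitch_features, sensitivity=2.0,
                                       epsilon=1.0, rng=rng)
```

In the paper's setting such noising is learned jointly with the extractor rather than applied post hoc as here.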

When the Curious Abandon Honesty: Federated Learning Is Not Private

1 code implementation6 Dec 2021 Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

Instead, these devices share gradients, parameters, or other model updates, with a central party (e.g., a company) coordinating the training.

Federated Learning Privacy Preserving +1

Losing Less: A Loss for Differentially Private Deep Learning

no code implementations29 Sep 2021 Ali Shahin Shamsabadi, Nicolas Papernot

In this paper, we are the first to observe that some of this performance can be recovered when training with a loss tailored to DP-SGD; we challenge cross-entropy as the de facto loss for deep learning with DP.

A Zest of LIME: Towards Architecture-Independent Model Distances

no code implementations ICLR 2022 Hengrui Jia, Hongyu Chen, Jonas Guan, Ali Shahin Shamsabadi, Nicolas Papernot

In this paper, we instead propose to compute distance between black-box models by comparing their Local Interpretable Model-Agnostic Explanations (LIME).

Machine Unlearning
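The idea above can be sketched concretely: fit a local linear surrogate to each black-box model around shared reference points, then compare the surrogates' coefficient vectors. The surrogate below is a crude least-squares stand-in for LIME, and the cosine-distance aggregation is an assumption of this sketch, not necessarily the paper's exact Zest metric:

```python
import numpy as np

def local_linear_explanation(predict, x, rng, n=200, sigma=0.1):
    # Stand-in for LIME: sample perturbations around x, query the
    # black-box model, and fit a linear surrogate to its outputs.
    Z = x + rng.normal(0.0, sigma, size=(n, x.size))
    y = np.array([predict(z) for z in Z])
    coef, *_ = np.linalg.lstsq(np.c_[Z, np.ones(n)], y, rcond=None)
    return coef[:-1]  # drop the intercept; keep feature attributions

def model_distance(f, g, refs, rng):
    # Average cosine distance between the two models' local
    # explanations over a shared set of reference points.
    dists = []
    for x in refs:
        a, b = (local_linear_explanation(m, x, rng) for m in (f, g))
        dists.append(1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(dists))
```

Because only input-output queries are needed, the distance is architecture-independent: identical models yield a distance near zero regardless of how they are parameterized.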

Semantically Adversarial Learnable Filters

2 code implementations13 Aug 2020 Ali Shahin Shamsabadi, Changjae Oh, Andrea Cavallaro

The proposed framework combines a structure loss and a semantic adversarial loss in a multi-task objective function to train a fully convolutional neural network.
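A multi-task objective of this shape can be sketched as a weighted sum of a detail-preservation term and a misclassification term. Both terms below, and the weighting, are hypothetical illustrations of the general pattern, not the paper's actual losses:

```python
import numpy as np

def multitask_loss(filtered, original, logits, true_class, lam=0.5):
    # Structure term (illustrative): penalize deviation of the filtered
    # image from the original, encouraging detail preservation.
    structure = np.mean((filtered - original) ** 2)
    # Semantic adversarial term (illustrative): reward pushing
    # probability mass away from the true class.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    adversarial = -np.log(1.0 - probs[true_class] + 1e-9)
    return structure + lam * adversarial
```

Training the filter network against such a combined loss trades off visual fidelity (first term) against fooling the classifier (second term) via the weight `lam`.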

Exploiting vulnerabilities of deep neural networks for privacy protection

1 code implementation19 Jul 2020 Ricardo Sanchez-Matilla, Chau Yi Li, Ali Shahin Shamsabadi, Riccardo Mazzon, Andrea Cavallaro

To address these limitations, we present an adversarial attack that is specifically designed to protect visual content against unseen classifiers and known defenses.

Adversarial Attack Quantization

PrivEdge: From Local to Distributed Private Training and Prediction

1 code implementation12 Apr 2020 Ali Shahin Shamsabadi, Adrià Gascón, Hamed Haddadi, Andrea Cavallaro

To address this problem, we propose PrivEdge, a technique for privacy-preserving MLaaS that safeguards the privacy of users who provide their data for training, as well as users who use the prediction service.

Image Compression Privacy Preserving

DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments

2 code implementations12 Apr 2020 Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi

We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).

Image Classification

ColorFool: Semantic Adversarial Colorization

2 code implementations CVPR 2020 Ali Shahin Shamsabadi, Ricardo Sanchez-Matilla, Andrea Cavallaro

Instead, adversarial attacks that generate unrestricted perturbations are more robust to defenses, are generally more successful in black-box settings and are more transferable to unseen classifiers.

Adversarial Attack Colorization +1

EdgeFool: An Adversarial Image Enhancement Filter

2 code implementations27 Oct 2019 Ali Shahin Shamsabadi, Changjae Oh, Andrea Cavallaro

This loss function accounts for both image detail enhancement and class misleading objectives.

Denoising Image Enhancement

Towards Characterizing and Limiting Information Exposure in DNN Layers

no code implementations13 Jul 2019 Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Andrea Cavallaro, Hamed Haddadi

Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, leading to potential disclosures of (sensitive) information from training data captured inside these models.

QUOTIENT: Two-Party Secure Neural Network Training and Prediction

no code implementations8 Jul 2019 Nitin Agrawal, Ali Shahin Shamsabadi, Matt J. Kusner, Adrià Gascón

In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts.

Vocal Bursts Valence Prediction

Distributed One-class Learning

no code implementations10 Feb 2018 Ali Shahin Shamsabadi, Hamed Haddadi, Andrea Cavallaro

A major advantage of the proposed filter over existing distributed learning approaches is that users cannot access, even indirectly, the parameters of other users.

Blocking One-class classifier

Deep Private-Feature Extraction

1 code implementation9 Feb 2018 Seyed Ali Osia, Ali Taheri, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, Hamid R. Rabiee

We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information theoretic constraints.

Privacy-Preserving Deep Inference for Rich User Data on The Cloud

1 code implementation4 Oct 2017 Seyed Ali Osia, Ali Shahin Shamsabadi, Ali Taheri, Kleomenis Katevas, Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi

Our evaluations show that by using certain kinds of fine-tuning and embedding techniques, and at a small processing cost, we can greatly reduce the level of information available to unintended tasks applied to the data features on the cloud, thereby achieving the desired trade-off between privacy and performance.

Privacy Preserving

A Hybrid Deep Learning Architecture for Privacy-Preserving Mobile Analytics

1 code implementation8 Mar 2017 Seyed Ali Osia, Ali Shahin Shamsabadi, Sina Sajadmanesh, Ali Taheri, Kleomenis Katevas, Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi

To this end, instead of performing the whole operation on the cloud, we let an IoT device run the initial layers of the neural network, and then send the output to the cloud to feed the remaining layers and produce the final result.

Privacy Preserving
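The split described above (initial layers on the device, remaining layers on the cloud) reduces to partitioning a forward pass. The two-layer network below is a minimal illustration with made-up shapes and weights, not the paper's architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def device_layers(x, W1):
    # Runs on the IoT device: only this intermediate feature vector,
    # not the raw input x, is transmitted to the cloud.
    return relu(x @ W1)

def cloud_layers(features, W2):
    # Runs on the cloud: consumes the intermediate features and
    # completes the forward pass to produce the final result.
    return features @ W2

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 2))
x = rng.normal(size=8)           # raw (potentially sensitive) input
result = cloud_layers(device_layers(x, W1), W2)
```

The privacy question the paper studies is how much of `x` remains recoverable from the intermediate features sent off-device.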
