Search Results for author: FatemehSadat Mireshghallah

Found 15 papers, 8 papers with code

Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness

1 code implementation • 10 Sep 2021 • FatemehSadat Mireshghallah, Taylor Berg-Kirkpatrick

Text style can reveal sensitive attributes of the author (e.g., race or age) to the reader, which can, in turn, lead to privacy violations and bias in both human and algorithmic decisions based on text.

Classification Fairness +1

Efficient Hyperparameter Optimization for Differentially Private Deep Learning

1 code implementation • 9 Aug 2021 • Aman Priyanshu, Rakshit Naidu, FatemehSadat Mireshghallah, Mohammad Malekzadeh

Tuning the hyperparameters of differentially private stochastic gradient descent (DP-SGD) is a fundamental challenge.

Hyperparameter Optimization
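
To make the hyperparameter-tuning challenge concrete, the sketch below shows a single illustrative DP-SGD update in NumPy. The clipping norm, noise multiplier, and learning rate in the signature are exactly the interacting knobs such work must tune; this is a generic textbook rendition, not the paper's implementation.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One illustrative DP-SGD update: clip each per-example gradient to
    `clip_norm`, average, add calibrated Gaussian noise, then apply the step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale grows with clip_norm * noise_multiplier, shrinks with batch size.
    sigma = clip_norm * noise_multiplier / len(per_example_grads)
    noisy_grad = mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * noisy_grad
```

Because the clipping norm bounds each example's influence while the noise multiplier sets the privacy budget, changing one typically forces retuning the others, which is what makes naive grid search expensive under a fixed privacy budget.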

Benchmarking Differential Privacy and Federated Learning for BERT Models

2 code implementations • 26 Jun 2021 • Priyam Basu, Tiasa Singha Roy, Rakshit Naidu, Zumrut Muftuoglu, Sahib Singh, FatemehSadat Mireshghallah

Natural Language Processing (NLP) techniques can be applied to help with the diagnosis of medical conditions such as depression, using a collection of a person's utterances.

Federated Learning

When Differential Privacy Meets Interpretability: A Case Study

no code implementations • 24 Jun 2021 • Rakshit Naidu, Aman Priyanshu, Aadith Kumar, Sasikanth Kotti, Haofan Wang, FatemehSadat Mireshghallah

Given the increasing use of personal data for training Deep Neural Networks (DNNs) in tasks such as medical imaging and diagnosis, differentially private training of DNNs is surging in importance, and a large body of work focuses on providing a better privacy-utility trade-off.

DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?

1 code implementation • 22 Jun 2021 • Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, FatemehSadat Mireshghallah, Andrew Trask

Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically the DP-SGD algorithm, has a disparate impact on different sub-groups in the population, leading to a significantly larger drop in model utility for under-represented sub-populations (minorities) than for well-represented ones.

Fairness

Privacy Regularization: Joint Privacy-Utility Optimization in Language Models

no code implementations • NAACL 2021 • FatemehSadat Mireshghallah, Huseyin Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim

In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a novel triplet-loss term.

Privacy Regularization: Joint Privacy-Utility Optimization in Language Models

no code implementations • 12 Mar 2021 • FatemehSadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim

In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a triplet-loss term.
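
The triplet-loss term mentioned above follows the standard triplet formulation: pull an anchor embedding toward a "positive" and push it away from a "negative" by at least a margin. The sketch below is the generic loss only; how the paper selects anchors, positives, and negatives for privacy is not shown here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet-loss term on embeddings: penalize the anchor for
    being closer to the negative than to the positive by less than `margin`.
    (Illustrative only; the paper adapts this idea to privacy-preserving
    training, not retrieval.)"""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero once the negative is at least `margin` farther from the anchor than the positive, so well-separated examples contribute no gradient.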

U-Noise: Learnable Noise Masks for Interpretable Image Segmentation

1 code implementation • 14 Jan 2021 • Teddy Koker, FatemehSadat Mireshghallah, Tom Titcombe, Georgios Kaissis

Deep Neural Networks (DNNs) are widely used for decision making in a myriad of critical applications, ranging from medical to societal and even judicial.

Decision Making Semantic Segmentation

WaveQ: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Regularization

1 code implementation • 1 Jan 2021 • Ahmed T. Elthakeb, Prannoy Pilligundla, Tarek Elgindi, FatemehSadat Mireshghallah, Charles-Alban Deledalle, Hadi Esmaeilzadeh

We show how WaveQ balances compute efficiency and accuracy, and provides a heterogeneous bitwidth assignment for quantization of a large variety of deep networks (AlexNet, CIFAR-10, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy.

Quantization
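
The core idea of a sinusoidal quantization regularizer can be sketched in a few lines: a periodic penalty that is zero exactly when every weight sits on a quantization grid point, so ordinary gradient descent is nudged toward quantized values. This is a minimal generic rendition, assuming a uniform grid with spacing `step`; the paper's full formulation (adaptive period, bitwidth assignment) is richer.

```python
import numpy as np

def sinusoidal_quant_reg(weights, step):
    """Illustrative sinusoidal regularizer: sin^2(pi * w / step) is zero
    whenever w is an integer multiple of `step` (a quantization level) and
    positive otherwise, so adding it to the training loss pulls weights
    toward the quantization grid via their gradients."""
    return float(np.sum(np.sin(np.pi * weights / step) ** 2))
```

Because the penalty is smooth and differentiable everywhere, it can be added to the task loss and optimized with standard gradient-based training, unlike hard rounding.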

Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy

2 code implementations • 10 Sep 2020 • Tom Farrand, FatemehSadat Mireshghallah, Sahib Singh, Andrew Trask

Deployment of deep learning in different fields and industries is growing day by day due to its performance, which in turn relies on the availability of data and compute.

Fairness

Privacy in Deep Learning: A Survey

no code implementations • 25 Apr 2020 • Fatemehsadat Mireshghallah, Mohammadkazem Taram, Praneeth Vepakomma, Abhishek Singh, Ramesh Raskar, Hadi Esmaeilzadeh

In this survey, we review the privacy concerns raised by deep learning and the mitigating techniques introduced to tackle these issues.

Recommendation Systems

Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy

no code implementations • 26 Mar 2020 • Fatemehsadat Mireshghallah, Mohammadkazem Taram, Ali Jalali, Ahmed Taha Elthakeb, Dean Tullsen, Hadi Esmaeilzadeh

We formulate this problem as a gradient-based perturbation maximization method that discovers this subset in the input feature space with respect to the functionality of the prediction model used by the provider.
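
A simple way to build intuition for discovering which input features matter to the provider's model is a sensitivity probe: features whose perturbation barely moves the prediction can tolerate heavy noising before being shared. The finite-difference sketch below is only a crude proxy for the paper's gradient-based perturbation-maximization formulation; the function and its signature are illustrative assumptions.

```python
import numpy as np

def feature_sensitivity(predict, x, eps=1e-3):
    """Illustrative proxy: perturb each input feature by `eps` and measure
    how much the model's scalar output moves. Low-sensitivity features are
    candidates for aggressive noising before the input leaves the client."""
    base = predict(x)
    scores = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        scores[i] = abs(predict(xp) - base) / eps  # finite-difference slope
    return scores
```

The paper's actual method optimizes perturbations directly with gradients rather than probing feature-by-feature, but the underlying question is the same: which coordinates of the input does the prediction actually depend on?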

WaveQ: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Adaptive Regularization

no code implementations • 29 Feb 2020 • Ahmed T. Elthakeb, Prannoy Pilligundla, FatemehSadat Mireshghallah, Tarek Elgindi, Charles-Alban Deledalle, Hadi Esmaeilzadeh

We show how SINAREQ balances compute efficiency and accuracy, and provides a heterogeneous bitwidth assignment for quantization of a large variety of deep networks (AlexNet, CIFAR-10, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy.

Quantization

Shredder: Learning Noise Distributions to Protect Inference Privacy

3 code implementations • 26 May 2019 • Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Dean Tullsen, Hadi Esmaeilzadeh

To address this challenge, this paper devises Shredder, an end-to-end framework that, without altering the topology or the weights of a pre-trained network, learns additive noise distributions that significantly reduce the information content of communicated data while maintaining the inference accuracy.

Image Classification
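
The mechanism Shredder describes, sampling from a learned additive noise distribution and applying it to intermediate activations before they leave the device, can be sketched generically as below. The Laplace parameterization here is an illustrative assumption; in the paper, the distribution parameters are what gets learned against an accuracy constraint.

```python
import numpy as np

def noisy_activations(activations, loc, scale, rng):
    """Illustrative Shredder-style step: add noise drawn from an additive
    distribution (here Laplace with parameters loc/scale, standing in for
    the learned distribution) to intermediate activations before sending
    them from the client to the remote inference service."""
    noise = rng.laplace(loc, scale, size=activations.shape)
    return activations + noise
```

The key point is that the network itself is untouched: only the tensor crossing the client-server boundary is perturbed, trading information leakage against inference accuracy.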

ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks

no code implementations • 5 Nov 2018 • Ahmed T. Elthakeb, Prannoy Pilligundla, FatemehSadat Mireshghallah, Amir Yazdanbakhsh, Hadi Esmaeilzadeh

We show how ReLeQ can balance speed and quality, and provide an asymmetric general solution for quantization of a large variety of deep networks (AlexNet, CIFAR-10, LeNet, MobileNet-V1, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy (≤ 0.3% loss) while minimizing the computation and storage cost.

Quantization
