Search Results for author: Oluwaseyi Feyisetan

Found 13 papers, 1 paper with code

Private Release of Text Embedding Vectors

no code implementations NAACL (TrustNLP) 2021 Oluwaseyi Feyisetan, Shiva Kasiviswanathan

Ensuring strong theoretical privacy guarantees on text data is a challenging problem, and such guarantees are usually attained at the expense of utility.

Privacy Preserving

On Log-Loss Scores and (No) Privacy

no code implementations EMNLP (PrivateNLP) 2020 Abhinav Aggarwal, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

In this paper, we show that a malicious modeler, upon obtaining access to the Log-Loss scores on its predictions, can exploit this information to infer all the ground truth labels of arbitrary test datasets with full accuracy.
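The paper's single-query attack relies on careful number-theoretic constructions; a simpler multi-query variant (a hypothetical illustration, not the paper's construction) conveys the core idea that exact log-loss scores leak labels:

```python
import math

def log_loss(labels, preds):
    # average binary cross-entropy over the test set
    n = len(labels)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, preds)) / n

def infer_labels(loss_oracle, n, probe=0.9):
    # baseline query: a prediction of 0.5 contributes log(2) regardless of label
    base = loss_oracle([0.5] * n)
    labels = []
    for j in range(n):
        preds = [0.5] * n
        preds[j] = probe  # raise one prediction; the loss drops iff y_j == 1
        labels.append(1 if loss_oracle(preds) < base else 0)
    return labels

secret = [1, 0, 1, 1, 0]
oracle = lambda preds: log_loss(secret, preds)
print(infer_labels(oracle, len(secret)))  # recovers [1, 0, 1, 1, 0]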

A Differentially Private Text Perturbation Method Using Regularized Mahalanobis Metric

no code implementations EMNLP (PrivateNLP) 2020 Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, Nathanael Teissier

In this paper, we propose a text perturbation mechanism based on a carefully designed regularized variant of the Mahalanobis metric to overcome this problem.

Privacy Preserving
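The idea can be sketched as follows, under stated assumptions: the regularization parameter (here called `lam`, a hypothetical name) interpolates between the empirical covariance of the word embeddings and the identity, so the sampled noise stretches along dense directions of the embedding cloud without collapsing in low-variance directions. This is an illustrative sketch, not the paper's exact mechanism.

```python
import numpy as np

def regularized_mahalanobis_noise(embeddings, epsilon, lam, rng):
    # regularized metric: interpolate embedding covariance with the identity
    d = embeddings.shape[1]
    cov = np.cov(embeddings, rowvar=False)
    reg = lam * cov + (1 - lam) * np.eye(d)  # positive definite for lam in (0, 1)
    root = np.linalg.cholesky(reg)
    # multivariate-Laplace-style sample: uniform direction, Gamma(d, 1/epsilon)
    # radius, then shaped by the square root of the regularized metric
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    r = rng.gamma(shape=d, scale=1.0 / epsilon)
    return root @ (r * u)

rng = np.random.default_rng(0)
emb = rng.standard_normal((100, 4))
z = regularized_mahalanobis_noise(emb, epsilon=5.0, lam=0.5, rng=rng)
print(z.shape)  # (4,)
```

The returned vector would be added to a word embedding before projecting back to the nearest vocabulary word, as in the Laplace-style mechanisms this line of work builds on.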

Reconstructing Test Labels from Noisy Loss Functions

no code implementations 7 Jul 2021 Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

Machine learning classifiers rely on loss functions for performance evaluation, often on a private (hidden) dataset.

Label Inference Attacks from Log-loss Scores

no code implementations 18 May 2021 Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

The log-loss (also known as cross-entropy loss) metric is used ubiquitously across machine learning applications to assess the performance of classification algorithms.

Research Challenges in Designing Differentially Private Text Generation Mechanisms

no code implementations 10 Dec 2020 Oluwaseyi Feyisetan, Abhinav Aggarwal, Zekun Xu, Nathanael Teissier

Such mechanisms add privacy-preserving noise to high-dimensional vector representations of text and return a text-based projection of the noisy vectors.

Privacy Preserving · Text Generation
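A minimal sketch of such a mechanism, under assumed toy embeddings: noise with density proportional to exp(-epsilon * ||z||) (uniform direction, Gamma-distributed radius) is added to a word's vector, and the noisy vector is projected back to text via its nearest vocabulary neighbor. The vocabulary and its values are made up for illustration.

```python
import numpy as np

# toy 2-d vocabulary embeddings (made-up values, purely illustrative)
vocab = {
    "good":  np.array([ 1.0,  0.1]),
    "great": np.array([ 0.9,  0.2]),
    "bad":   np.array([-1.0, -0.1]),
    "awful": np.array([-0.9, -0.2]),
}

def private_projection(word, epsilon, rng):
    # sample noise: uniform direction, Gamma(d, 1/epsilon) radius, which gives
    # a density proportional to exp(-epsilon * ||z||) in d dimensions
    v = vocab[word]
    u = rng.standard_normal(v.shape[0])
    u /= np.linalg.norm(u)
    noisy = v + rng.gamma(shape=v.shape[0], scale=1.0 / epsilon) * u
    # text-based projection of the noisy vector: nearest vocabulary word
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - noisy))

rng = np.random.default_rng(1)
print(private_projection("good", epsilon=10.0, rng=rng))
```

Smaller epsilon means larger noise radii, so the returned word is more often a neighbor (or a distant word) rather than the original, trading utility for privacy.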

A Differentially Private Text Perturbation Method Using a Regularized Mahalanobis Metric

no code implementations 22 Oct 2020 Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, Nathanael Teissier

Balancing the privacy-utility tradeoff is a crucial requirement of many practical machine learning systems that deal with sensitive customer data.

Privacy Preserving

Differentially Private Adversarial Robustness Through Randomized Perturbations

no code implementations 27 Sep 2020 Nan Xu, Oluwaseyi Feyisetan, Abhinav Aggarwal, Zekun Xu, Nathanael Teissier

Deep Neural Networks, despite their great success in diverse domains, are provably sensitive to small perturbations on correctly classified examples, which lead to erroneous predictions.

Adversarial Robustness · Semantic Similarity +1

On Primes, Log-Loss Scores and (No) Privacy

no code implementations 17 Sep 2020 Abhinav Aggarwal, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

Membership Inference Attacks exploit the vulnerabilities of exposing models trained on customer data to queries by an adversary.

Privacy- and Utility-Preserving Textual Analysis via Calibrated Multivariate Perturbations

1 code implementation 20 Oct 2019 Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, Tom Diethe

We conduct privacy audit experiments against 2 baseline models and utility experiments on 3 datasets to demonstrate the tradeoff between privacy and utility for varying values of epsilon on different task types.

Privacy Preserving

Privacy-preserving Active Learning on Sensitive Data for User Intent Classification

no code implementations 26 Mar 2019 Oluwaseyi Feyisetan, Thomas Drake, Borja Balle, Tom Diethe

Active learning holds promise of significantly reducing data annotation costs while maintaining reasonable model performance.

Active Learning · Binary Classification +4
