no code implementations • EMNLP (PrivateNLP) 2020 • Abhinav Aggarwal, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
In this paper, we show that a malicious modeler, upon obtaining access to the Log-Loss scores on its predictions, can exploit this information to infer all the ground truth labels of arbitrary test datasets with full accuracy.
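The flavor of such an attack can be sketched in a few lines. The construction below is a hypothetical illustration (not necessarily the paper's exact scheme): by choosing the queried probabilities carefully, all labels are encoded in the binary expansion of a single log-loss query.

```python
import numpy as np

def logloss_oracle(y_true, p_pred):
    """Hidden-label oracle: returns the total log-loss of the predictions."""
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(p_pred, dtype=float)
    return float(np.sum(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def infer_labels(oracle, n):
    """Recover all n binary labels from a single log-loss query.

    Example i is assigned probability p_i = 1 / (1 + exp(2**i)), so its
    loss is log1p(exp(-2**i)) + y_i * 2**i.  Subtracting the known,
    label-independent part leaves sum_i y_i * 2**i: the labels in binary.
    """
    exps = 2.0 ** np.arange(n)                  # 1, 2, 4, ...
    p = 1.0 / (1.0 + np.exp(exps))              # carefully chosen predictions
    base = np.sum(np.log1p(np.exp(-exps)))      # label-independent part of the loss
    code = int(round(oracle(p) - base))         # = sum_i y_i * 2**i
    return [(code >> i) & 1 for i in range(n)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = infer_labels(lambda p: logloss_oracle(secret, p), len(secret))
```

Floating-point precision limits how many labels one query can pack this way, but the sketch shows why returning exact loss values to an untrusted party leaks the ground truth.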
no code implementations • EMNLP (PrivateNLP) 2020 • Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, Nathanael Teissier
In this paper, we propose a text perturbation mechanism based on a carefully designed regularized variant of the Mahalanobis metric to overcome this problem.
no code implementations • 7 Jul 2021 • Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
Machine learning classifiers rely on loss functions for performance evaluation, often on a private (hidden) dataset.
no code implementations • 18 May 2021 • Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
The log-loss (also known as cross-entropy loss) metric is used ubiquitously across machine learning applications to assess the performance of classification algorithms.
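For reference, the metric itself is one line of arithmetic; here is a minimal NumPy version, with the standard clipping trick to keep the logarithms finite:

```python
import numpy as np

def log_loss(y_true, p_pred, eps=1e-15):
    """Binary cross-entropy: mean of -[y*log(p) + (1-y)*log(1-p)].
    Predictions are clipped away from 0 and 1 so the logs stay finite."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
```

Confident correct predictions drive the loss toward zero, while confident wrong ones are penalized steeply, which is why the metric is so informative — and, per the work above, why its exact values can leak information.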
no code implementations • NAACL (PrivateNLP) 2021 • Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, Nathanael Teissier
This is because the nearest neighbor to the noised input is likely to be the original input.
no code implementations • 10 Dec 2020 • Oluwaseyi Feyisetan, Abhinav Aggarwal, Zekun Xu, Nathanael Teissier
Such mechanisms add privacy-preserving noise to high-dimensional vector representations of text and return a text-based projection of the noisy vectors.
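A generic sketch of that pipeline (function and parameter names here are illustrative; the noise follows the common recipe of a uniform random direction scaled by a Gamma-distributed magnitude):

```python
import numpy as np

def dx_perturb(vec, embeddings, words, eps, rng):
    """Add heavy-tailed noise to a word vector, then return the nearest
    vocabulary word, i.e. the text-based projection of the noisy vector."""
    d = vec.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # uniform random direction
    r = rng.gamma(shape=d, scale=1.0 / eps)     # noise magnitude
    noisy = vec + r * u
    dists = np.linalg.norm(embeddings - noisy, axis=1)
    return words[int(np.argmin(dists))]

# Toy vocabulary for illustration only.
rng = np.random.default_rng(0)
words = ["good", "great", "terrible"]
embeddings = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
out = dx_perturb(embeddings[0], embeddings, words, eps=50.0, rng=rng)
```

Note that with large `eps` (weak noise) the nearest neighbor of the noisy vector is usually the original word itself, so the projection step frequently undoes the perturbation.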
no code implementations • 22 Oct 2020 • Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, Nathanael Teissier
Balancing the privacy-utility tradeoff is a crucial requirement of many practical machine learning systems that deal with sensitive customer data.
no code implementations • 27 Sep 2020 • Nan Xu, Oluwaseyi Feyisetan, Abhinav Aggarwal, Zekun Xu, Nathanael Teissier
Deep Neural Networks, despite their great success in diverse domains, are provably sensitive to small perturbations of correctly classified examples, which lead to erroneous predictions.
no code implementations • 17 Sep 2020 • Abhinav Aggarwal, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
Membership Inference Attacks exploit the vulnerability created when models trained on customer data are exposed to queries from an adversary.
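A standard baseline for such attacks is the loss-threshold attack: guess "member" whenever the model's loss on a record is suspiciously small, since models typically fit their training data more tightly than unseen data. The losses below are simulated purely for illustration:

```python
import numpy as np

def mia_threshold_attack(losses, threshold):
    """Loss-threshold membership inference: guess 'member' (True)
    when the model's loss on the record falls below the threshold."""
    return losses < threshold

# Simulated per-record losses: training records tend to have smaller loss.
rng = np.random.default_rng(1)
member_losses = rng.exponential(scale=0.1, size=1000)      # records in the train set
nonmember_losses = rng.exponential(scale=1.0, size=1000)   # records never seen

guesses_m = mia_threshold_attack(member_losses, threshold=0.3)
guesses_n = mia_threshold_attack(nonmember_losses, threshold=0.3)
accuracy = (guesses_m.mean() + (1 - guesses_n.mean())) / 2
```

The larger the train/test loss gap (i.e., the more the model overfits), the better this simple attack performs, which is what defenses against membership inference aim to suppress.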
no code implementations • 6 Aug 2018 • Cynthia Freeman, Jonathan Merriman, Abhinav Aggarwal, Ian Beaver, Abdullah Mueen
In (Yang et al. 2016), a hierarchical attention network (HAN) is proposed for document classification.