1 code implementation • 9 Feb 2024 • Yuntao Du, Ninghui Li
Data synthesis has been advocated as an important approach for utilizing data while protecting data privacy.
no code implementations • 2 Nov 2023 • Jiacheng Li, Ninghui Li, Bruno Ribeiro
Most MI attacks in the literature take advantage of the fact that ML models are trained to fit the training data well, and thus have very low loss on training instances.
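The low-loss observation above can be sketched as a simple threshold attack. This is a generic illustration with made-up confidence values, not the paper's method; the threshold would in practice be calibrated, e.g., on shadow models.

```python
import numpy as np

def cross_entropy_loss(prob_true_class, eps=1e-12):
    """Per-instance cross-entropy loss, given the model's predicted
    probability for the instance's true class."""
    return -np.log(np.clip(prob_true_class, eps, 1.0))

def loss_threshold_mi_attack(prob_true_class, threshold):
    """Predict 'member' when the instance's loss falls below the
    threshold, exploiting that models fit training data closely."""
    return cross_entropy_loss(prob_true_class) < threshold

# Hypothetical confidences: training members tend to get a higher
# probability on their true class than non-members do.
member_probs = np.array([0.99, 0.97, 0.95])
nonmember_probs = np.array([0.60, 0.40, 0.75])
threshold = cross_entropy_loss(0.9)  # assumed calibration point

is_member_flags = loss_threshold_mi_attack(member_probs, threshold)
```

With these toy numbers, every member falls below the loss threshold and every non-member lies above it; real attacks trade off the two error rates when choosing the threshold.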
2 code implementations • 2 Aug 2022 • Zitao Li, Tianhao Wang, Ninghui Li
To enable model learning while protecting the privacy of the data subjects, we need vertical federated learning (VFL) techniques, where the data parties share only information for training the model, instead of the private data.
no code implementations • 23 Jan 2022 • Shagufta Mehnaz, Sayanton V. Dibbo, Ehsanul Kabir, Ninghui Li, Elisa Bertino
Increasing use of machine learning (ML) technologies in privacy-sensitive domains such as medical diagnoses, lifestyle predictions, and business decisions highlights the need to better understand if these ML technologies are introducing leakage of sensitive and proprietary training data.
no code implementations • 17 Dec 2020 • Fabrizio Cicala, Weicheng Wang, Tianhao Wang, Ninghui Li, Elisa Bertino, Faming Liang, Yang Yang
Many proximity-based contact tracing (PCT) protocols have been proposed and deployed to combat the spread of COVID-19.

no code implementations • 7 Dec 2020 • Shagufta Mehnaz, Ninghui Li, Elisa Bertino
In this paper, we focus on one kind of model inversion attacks, where the adversary knows non-sensitive attributes about instances in the training data and aims to infer the value of a sensitive attribute unknown to the adversary, using oracle access to the target classification model.
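The attack setting described above can be illustrated with a hedged sketch: the adversary queries the model once per candidate sensitive value, filling in the attributes it already knows, and keeps the candidate that makes the model most confident in the record's known label. The toy model, attribute names, and confidence values below are hypothetical, not the paper's attack.

```python
def confidence_based_inversion(model_predict, known_attrs, true_label, candidates):
    """Try each candidate sensitive value via oracle access to the
    model; return the one yielding the highest confidence for the
    record's true label. A simplified sketch of the general idea."""
    best_value, best_conf = None, -1.0
    for value in candidates:
        probs = model_predict({**known_attrs, "sensitive": value})
        conf = probs[true_label]
        if conf > best_conf:
            best_value, best_conf = value, conf
    return best_value

# Hypothetical target model: it is more confident in label 1 when the
# sensitive attribute takes the value the record actually had (here, 1).
def toy_model(record):
    conf1 = 0.9 if record["sensitive"] == 1 else 0.4
    return {0: 1.0 - conf1, 1: conf1}

guess = confidence_based_inversion(toy_model, {"age": 35},
                                   true_label=1, candidates=[0, 1])
```

The sketch recovers the sensitive value precisely because the model's confidence leaks how well the queried record matches its training data.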
1 code implementation • 24 May 2020 • Tianhao Wang, Joann Qiongna Chen, Zhikun Zhang, Dong Su, Yueqiang Cheng, Zhou Li, Ninghui Li, Somesh Jha
To our knowledge, this is the first LDP algorithm for publishing streaming data.
1 code implementation • 27 Feb 2020 • Jiacheng Li, Ninghui Li, Bruno Ribeiro
We study the membership inference (MI) attack against classifiers, where the attacker's goal is to determine whether a data instance was used for training the classifier.
2 code implementations • 2 Dec 2019 • Zitao Li, Tianhao Wang, Milan Lopuhaä-Zwakenberg, Boris Skoric, Ninghui Li
When collecting information, local differential privacy (LDP) relieves users' concerns about privacy leakage, as their private information is randomized before being sent to the aggregator.
1 code implementation • 30 Aug 2019 • Tianhao Wang, Bolin Ding, Min Xu, Zhicong Huang, Cheng Hong, Jingren Zhou, Ninghui Li, Somesh Jha
When collecting information, local differential privacy (LDP) alleviates users' privacy concerns because their private information is randomized before being sent to the central aggregator.
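A minimal example of the kind of randomization LDP relies on is generalized randomized response, a standard frequency-oracle building block. This is a generic sketch of that primitive, not the protocol the paper proposes; the domain, epsilon, and sample size are illustrative.

```python
import numpy as np

def grr_perturb(value, domain, epsilon, rng):
    """Generalized randomized response: report the true value with
    probability p = e^eps / (e^eps + d - 1), otherwise a uniformly
    random *other* value from the domain."""
    d = len(domain)
    p = np.exp(epsilon) / (np.exp(epsilon) + d - 1)
    if rng.random() < p:
        return value
    others = [v for v in domain if v != value]
    return others[rng.integers(len(others))]

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates from the perturbed reports."""
    d, n = len(domain), len(reports)
    p = np.exp(epsilon) / (np.exp(epsilon) + d - 1)
    q = (1.0 - p) / (d - 1)
    counts = np.array([sum(r == v for r in reports) for v in domain])
    return (counts / n - q) / (p - q)

rng = np.random.default_rng(0)
domain = [0, 1, 2, 3]
true_values = rng.integers(0, 4, size=20000)  # uniform toy population
reports = [grr_perturb(int(v), domain, epsilon=2.0, rng=rng)
           for v in true_values]
est = grr_estimate(reports, domain, epsilon=2.0)
```

The aggregator only ever sees the randomized reports, yet the debiased estimates converge to the true frequencies (here, about 0.25 each) as the number of users grows.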
1 code implementation • 20 May 2019 • Tianhao Wang, Milan Lopuhaä-Zwakenberg, Zitao Li, Boris Skoric, Ninghui Li
In this paper, we show that adding post-processing steps to FO protocols, exploiting the knowledge that all individual frequencies are non-negative and sum to one, can lead to significantly better accuracy for a wide range of tasks, including frequencies of individual values, frequencies of the most frequent values, and frequencies of subsets of values.
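One such post-processing step can be sketched as repeatedly clipping negative estimates to zero and shifting the remaining ones by a common constant until the frequencies sum to one. This is a simplified "normalize by subtraction" sketch under those two constraints, not the authors' implementation, and the input estimates below are made up.

```python
def norm_sub(estimates, tol=1e-12):
    """Project noisy frequency estimates toward consistency: clip
    values that are not positive to zero, then subtract a common delta
    from the positive ones so the total becomes one; repeat until the
    adjustment is negligible."""
    est = list(estimates)
    while True:
        positive = [i for i, v in enumerate(est) if v > 0]
        if not positive:
            return [0.0] * len(est)
        delta = (sum(est[i] for i in positive) - 1.0) / len(positive)
        if abs(delta) < tol:
            break
        est = [max(0.0, est[i] - delta) if i in positive else 0.0
               for i in range(len(est))]
    return [v if v > 0 else 0.0 for v in est]

# Hypothetical noisy FO output: one negative entry, total above one.
consistent = norm_sub([0.5, 0.4, 0.3, -0.1])
```

The output is a valid frequency vector (non-negative, summing to one), which is exactly the prior knowledge the post-processing exploits.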
1 code implementation • 5 Dec 2018 • Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li
Image classifiers often suffer from adversarial examples, which are generated by strategically adding a small amount of noise to input images to trick classifiers into misclassification.
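The strategic noise described above can be illustrated with the fast gradient sign method (FGSM), a standard attack that is separate from this paper's contribution, applied to a toy logistic "classifier". The weights, input, and epsilon are all synthetic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method: nudge each input feature by eps in
    the direction that increases the logistic loss for true label y."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy linear "image classifier" with random weights and input.
rng = np.random.default_rng(1)
w = rng.normal(size=16)
b = 0.0
x = rng.normal(size=16)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # model's current label
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
```

Each feature moves by at most eps, yet the perturbation is aligned with the loss gradient, so the model's confidence in the original label provably drops; on deep networks the same small, structured noise routinely flips the predicted class.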