Search Results for author: Ninghui Li

Found 12 papers, 8 papers with code

Systematic Assessment of Tabular Data Synthesis Algorithms

1 code implementation • 9 Feb 2024 • Yuntao Du, Ninghui Li

Data synthesis has been advocated as an important approach for utilizing data while protecting data privacy.

Privacy Preserving

MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training

no code implementations • 2 Nov 2023 • Jiacheng Li, Ninghui Li, Bruno Ribeiro

Most MI attacks in the literature take advantage of the fact that ML models are trained to fit the training data well, and thus have very low loss on training instances.
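
As a rough illustration of this observation, the sketch below implements a simple loss-threshold membership inference attack in Python: records on which the target model's loss is unusually low are guessed to be training members. The predict_proba callable, the toy model, and the threshold are illustrative assumptions, not part of the MIST paper.

import numpy as np

def cross_entropy(probs, label, eps=1e-12):
    # Per-example cross-entropy loss given a predicted probability vector.
    return -np.log(probs[label] + eps)

def loss_threshold_attack(predict_proba, records, labels, threshold):
    # Guess "member" whenever the model's loss on the record is below the threshold.
    guesses = []
    for x, y in zip(records, labels):
        loss = cross_entropy(predict_proba(x), y)
        guesses.append(loss < threshold)
    return np.array(guesses)

# Toy black-box "model" (hypothetical): confident on members, unsure otherwise.
fake_model = lambda x: np.array([0.9, 0.1]) if x["member"] else np.array([0.6, 0.4])
records = [{"member": True}, {"member": False}]
print(loss_threshold_attack(fake_model, records, labels=[0, 0], threshold=0.3))
# -> [ True False]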

Representation Learning

Differentially Private Vertical Federated Clustering

2 code implementations • 2 Aug 2022 • Zitao Li, Tianhao Wang, Ninghui Li

To enable model learning while protecting the privacy of the data subjects, we need vertical federated learning (VFL) techniques, where the data parties share only information for training the model, instead of the private data.
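
To make the vertical setting concrete, here is a minimal, non-private Python sketch in which two parties hold disjoint feature columns of the same users and exchange only per-user partial squared distances to shared centroids. The party names, dimensions, and the absence of differential privacy noise are all simplifying assumptions, not the paper's protocol.

import numpy as np

rng = np.random.default_rng(1)
n_users, k = 6, 2
features_a = rng.normal(size=(n_users, 3))   # party A's feature columns
features_b = rng.normal(size=(n_users, 2))   # party B's feature columns
centroids_a = rng.normal(size=(k, 3))        # centroid coordinates on A's features
centroids_b = rng.normal(size=(k, 2))        # centroid coordinates on B's features

def partial_sq_dists(features, centroids):
    # Each party computes its own share of the squared distance to every centroid.
    return ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)

# Only these aggregated partial distances cross party boundaries, never raw features.
total = partial_sq_dists(features_a, centroids_a) + partial_sq_dists(features_b, centroids_b)
assignments = total.argmin(axis=1)           # per-user cluster assignment
print(assignments)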

Clustering • Vertical Federated Learning

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

no code implementations • 23 Jan 2022 • Shagufta Mehnaz, Sayanton V. Dibbo, Ehsanul Kabir, Ninghui Li, Elisa Bertino

Increasing use of machine learning (ML) technologies in privacy-sensitive domains such as medical diagnoses, lifestyle predictions, and business decisions highlights the need to better understand if these ML technologies are introducing leakage of sensitive and proprietary training data.

Attribute Inference Attack

PURE: A Framework for Analyzing Proximity-based Contact Tracing Protocols

no code implementations • 17 Dec 2020 • Fabrizio Cicala, Weicheng Wang, Tianhao Wang, Ninghui Li, Elisa Bertino, Faming Liang, Yang Yang

Many proximity-based contact tracing (PCT) protocols have been proposed and deployed to combat the spread of COVID-19.

Computers and Society

Black-box Model Inversion Attribute Inference Attacks on Classification Models

no code implementations • 7 Dec 2020 • Shagufta Mehnaz, Ninghui Li, Elisa Bertino

In this paper, we focus on one kind of model inversion attacks, where the adversary knows non-sensitive attributes about instances in the training data and aims to infer the value of a sensitive attribute unknown to the adversary, using oracle access to the target classification model.
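
A minimal sketch of this attack setting is shown below, assuming black-box access through a predict_proba callable: the adversary completes the record with each candidate sensitive value, queries the model, and keeps the candidate that yields the highest confidence for the known class. Function and attribute names are hypothetical, and the scoring rule is a simplification of the attacks studied in the paper.

import numpy as np

def infer_sensitive_attribute(predict_proba, known_attrs, true_label, candidates):
    # Try every candidate sensitive value and keep the one the model is most
    # confident about for the known true class.
    scores = []
    for value in candidates:
        record = dict(known_attrs, sensitive=value)
        scores.append(predict_proba(record)[true_label])
    return candidates[int(np.argmax(scores))]

# Hypothetical target model: more confident when the sensitive attribute is 1.
toy_model = lambda r: np.array([0.2, 0.8]) if r["sensitive"] == 1 else np.array([0.5, 0.5])
print(infer_sensitive_attribute(toy_model, {"age": 34}, true_label=1, candidates=[0, 1]))
# -> 1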

Attribute • Classification +1

Membership Inference Attacks and Defenses in Classification Models

1 code implementation • 27 Feb 2020 • Jiacheng Li, Ninghui Li, Bruno Ribeiro

We study the membership inference (MI) attack against classifiers, where the attacker's goal is to determine whether a data instance was used for training the classifier.

Classification • General Classification

Estimating Numerical Distributions under Local Differential Privacy

2 code implementations • 2 Dec 2019 • Zitao Li, Tianhao Wang, Milan Lopuhaä-Zwakenberg, Boris Skoric, Ninghui Li

When collecting information, local differential privacy (LDP) relieves users' concern about privacy leakage, as each user's private information is randomized before being sent to the aggregator.
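
For intuition, the following Python sketch shows a standard one-bit ε-LDP mechanism for a numeric value in [-1, 1] (in the style of Duchi et al.): each user randomizes locally, and the aggregator's average of the noisy reports is an unbiased estimate of the true mean. This is an illustration of LDP randomization, not necessarily the estimators proposed in this paper.

import numpy as np

def perturb(v, eps, rng):
    # Randomize v in [-1, 1]; the report is +/-scale and is unbiased for v.
    scale = (np.exp(eps) + 1) / (np.exp(eps) - 1)
    p_plus = 0.5 + 0.5 * v / scale
    return scale if rng.random() < p_plus else -scale

rng = np.random.default_rng(0)
true_values = rng.uniform(-1, 1, size=20000)
reports = np.array([perturb(v, eps=1.0, rng=rng) for v in true_values])
# The aggregator only sees noisy reports; their mean estimates the true mean.
print(true_values.mean(), reports.mean())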

Improving Utility and Security of the Shuffler-based Differential Privacy

1 code implementation • 30 Aug 2019 • Tianhao Wang, Bolin Ding, Min Xu, Zhicong Huang, Cheng Hong, Jingren Zhou, Ninghui Li, Somesh Jha

When collecting information, local differential privacy (LDP) alleviates privacy concerns of users because their private information is randomized before being sent to the central aggregator.
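
The toy sketch below illustrates the shuffle model this paper builds on: each user applies binary randomized response locally, a shuffler randomly permutes the reports so they can no longer be linked to identities, and the analyzer debiases the shuffled counts. The ε value and the use of plain randomized response are simplifying assumptions, not the paper's improved protocol.

import numpy as np

def randomized_response(bit, eps, rng):
    # Keep the true bit with probability e^eps / (e^eps + 1), otherwise flip it.
    keep = np.exp(eps) / (np.exp(eps) + 1)
    return bit if rng.random() < keep else 1 - bit

rng = np.random.default_rng(0)
true_bits = rng.integers(0, 2, size=10000)
reports = np.array([randomized_response(b, eps=1.0, rng=rng) for b in true_bits])
shuffled = rng.permutation(reports)          # the shuffler only permutes reports

# The analyzer debiases the shuffled reports to estimate the fraction of ones.
keep = np.exp(1.0) / (np.exp(1.0) + 1)
estimate = (shuffled.mean() - (1 - keep)) / (2 * keep - 1)
print(true_bits.mean(), estimate)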

Locally Differentially Private Frequency Estimation with Consistency

1 code implementation • 20 May 2019 • Tianhao Wang, Milan Lopuhaä-Zwakenberg, Zitao Li, Boris Skoric, Ninghui Li

In this paper, we show that adding post-processing steps to FO protocols by exploiting the knowledge that all individual frequencies should be non-negative and they sum up to one can lead to significantly better accuracy for a wide range of tasks, including frequencies of individual values, frequencies of the most frequent values, and frequencies of subsets of values.
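
One common way to enforce these two constraints is to project the noisy frequency vector onto the probability simplex, as in the hedged Python sketch below; the paper studies several post-processing variants, and this Euclidean projection is just one illustration of the idea.

import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {w : w >= 0, sum(w) = 1}.
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - cssv / idx > 0)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Raw LDP frequency estimates can be negative and need not sum to one.
noisy = np.array([0.45, -0.05, 0.30, 0.20, 0.15])
consistent = project_to_simplex(noisy)
print(consistent, consistent.sum())          # non-negative entries summing to 1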

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples

1 code implementation • 5 Dec 2018 • Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li

Image classifiers often suffer from adversarial examples, which are generated by strategically adding a small amount of noise to input images to trick classifiers into misclassification.
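
As a concrete example of such an attack (not the Random Spiking defense proposed here), the sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression model, where the input gradient of the loss can be computed analytically; the model, weights, and ε are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    # One-step FGSM on a logistic model p(y=1|x) = sigmoid(w @ x):
    # move the input a small step in the sign of the loss gradient.
    grad = (sigmoid(w @ x) - y) * w            # d(cross-entropy)/dx, analytic
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=10)                        # hypothetical model weights
x = rng.normal(size=10)                        # an input the model classifies
y = 1 if sigmoid(w @ x) > 0.5 else 0           # use the model's own label
x_adv = fgsm(x, y, w, eps=0.5)

p_true = lambda z: sigmoid(w @ z) if y == 1 else 1 - sigmoid(w @ z)
print(p_true(x), p_true(x_adv))                # confidence in the true class drops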
