Search Results for author: Neil Zhenqiang Gong

Found 45 papers, 11 papers with code

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

1 code implementation • 3 Oct 2022 • Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong

In this work, we propose MultiGuard, the first provably robust defense for multi-label classification against adversarial examples.

Classification Multi-class Classification +1

FLCert: Provably Secure Federated Learning against Poisoning Attacks

no code implementations • 2 Oct 2022 • Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong

Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models to classify a test input.

Federated Learning
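A minimal sketch of the grouping-and-voting idea described in the FLCert excerpt above (not the authors' implementation); `train_federated` and `predict` are hypothetical stand-ins for any existing federated learning method and its model's inference routine:

```python
import random
from collections import Counter

def flcert_train(clients, num_groups, train_federated, seed=0):
    """Randomly partition clients into groups; train one global model per group."""
    rng = random.Random(seed)
    shuffled = clients[:]
    rng.shuffle(shuffled)
    groups = [shuffled[i::num_groups] for i in range(num_groups)]
    return [train_federated(group) for group in groups]

def flcert_predict(models, x, predict):
    """Classify a test input by majority vote among the per-group global models."""
    votes = Counter(predict(model, x) for model in models)
    return votes.most_common(1)[0][0]
```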

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

1 code implementation • 25 Jul 2022 • Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang

The results show that early stopping can mitigate the membership inference attack, but at the cost of degrading the model's utility.

Data Augmentation Inference Attack +1

FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients

1 code implementation • 19 Jul 2022 • Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

FLDetector aims to detect and remove the majority of the malicious clients such that a Byzantine-robust FL method can learn an accurate global model using the remaining clients.

Federated Learning Model Poisoning

MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

no code implementations • 16 Mar 2022 • Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learnt global model has low accuracy on many test inputs indiscriminately.

Federated Learning Model Poisoning

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning

no code implementations • 15 Jan 2022 • Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong

A pre-trained encoder may be deemed confidential because its training requires a large amount of data and computation resources, and because its public release may facilitate misuse of AI, e.g., for deepfake generation.

Self-Supervised Learning

HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

no code implementations • 23 Nov 2021 • Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen

We therefore propose HERO, a Hessian-enhanced robust optimization method, to minimize the Hessian eigenvalues through a gradient-based training process, simultaneously improving the generalization and quantization performance.

Quantization

FaceGuard: Proactive Deepfake Detection

no code implementations • 13 Sep 2021 • Yuankun Yang, Chenyue Liang, Hongyu He, Xiaoyu Cao, Neil Zhenqiang Gong

A key limitation of passive detection is that it cannot detect fake faces that are generated by new deepfake generation methods.

DeepFake Detection Face Swapping

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning

no code implementations • 25 Aug 2021 • Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong

EncoderMI can be used 1) by a data owner to audit whether its (public) data was used to pre-train an image encoder without authorization, or 2) by an attacker to compromise the privacy of the training data when it is private/sensitive.

Contrastive Learning

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

1 code implementation • 1 Aug 2021 • Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong

In particular, our BadEncoder injects backdoors into a pre-trained image encoder such that the downstream classifiers built based on the backdoored image encoder for different downstream tasks simultaneously inherit the backdoor behavior.

Backdoor Attack Self-Supervised Learning

Understanding the Security of Deepfake Detection

no code implementations • 5 Jul 2021 • Xiaoyu Cao, Neil Zhenqiang Gong

Existing studies mainly focused on improving detection performance in non-adversarial settings, leaving the security of deepfake detection in adversarial settings largely unexplored.

DeepFake Detection Face Swapping

Rethinking Lifelong Sequential Recommendation with Incremental Multi-Interest Attention

no code implementations • 28 May 2021 • Yongji Wu, Lu Yin, Defu Lian, Mingyang Yin, Neil Zhenqiang Gong, Jingren Zhou, Hongxia Yang

With the rapid development of these services in the last two decades, users have accumulated a massive amount of behavior data.

Sequential Recommendation

Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation

1 code implementation • 28 May 2021 • Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang

Inspired by the idea of vector quantization that uses cluster centroids to approximate items, we propose LISA (LInear-time Self Attention), which enjoys both the effectiveness of vanilla self-attention and the efficiency of sparse attention.

Quantization Sequential Recommendation

PointGuard: Provably Robust 3D Point Cloud Classification

no code implementations • CVPR 2021 • Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

Our first major theoretical contribution is that we show PointGuard provably predicts the same label for a 3D point cloud when the number of adversarially modified, added, and/or deleted points is bounded.

3D Point Cloud Classification Autonomous Driving +4

Data Poisoning Attacks and Defenses to Crowdsourcing Systems

no code implementations • 18 Feb 2021 • Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu

Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks.

Data Poisoning

Provably Secure Federated Learning against Malicious Clients

no code implementations • 3 Feb 2021 • Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients.

Federated Learning Human Activity Recognition

Data Poisoning Attacks to Deep Learning Based Recommender Systems

no code implementations • 7 Jan 2021 • Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu

Specifically, we formulate our attack as an optimization problem, such that the injected ratings would maximize the number of normal users to whom the target items are recommended.

Data Poisoning Recommendation Systems

Practical Blind Membership Inference Attack via Differential Comparisons

1 code implementation • 5 Jan 2021 • Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao

The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target. The latter, given only black-box probing access to the target model, cannot make an effective inference of unknowns compared with MI attacks using shadow models, due to the insufficient number of qualified samples labeled with ground-truth membership information.

Inference Attack Membership Inference Attack

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

no code implementations • 27 Dec 2020 • Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong

Finally, the service provider computes the average of the normalized local model updates weighted by their trust scores as a global model update, which is used to update the global model.

Federated Learning
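A hedged sketch of just the aggregation step quoted in the FLTrust excerpt above; the trust scores and the normalization of the client updates are assumed to be computed elsewhere:

```python
import numpy as np

def fltrust_aggregate(normalized_updates, trust_scores):
    """Trust-score-weighted average of normalized local model updates."""
    updates = np.stack(normalized_updates)          # shape: (num_clients, num_params)
    scores = np.asarray(trust_scores, dtype=float)  # one non-negative score per client
    total = scores.sum()
    if total == 0:
        return np.zeros(updates.shape[1])           # no trusted clients this round
    return (scores[:, None] * updates).sum(axis=0) / total
```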

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks

no code implementations • 7 Dec 2020 • Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong

Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses.

Data Poisoning

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

no code implementations • ICLR 2022 • Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong

For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.

Recommendation Systems

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

no code implementations • 26 Oct 2020 • Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Moreover, to be robust against post-processing, we leverage Turbo codes, a type of error-correcting codes, to encode the message before embedding it to the DNN classifier.

Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation

no code implementations • 24 Aug 2020 • Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation.

Cryptography and Security

On the Intrinsic Differential Privacy of Bagging

no code implementations • 22 Aug 2020 • Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

Bagging, a popular ensemble learning framework, randomly creates some subsamples of the training data, trains a base model for each subsample using a base learner, and takes majority vote among the base models when making predictions.

BIG-bench Machine Learning Ensemble Learning
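A minimal bagging sketch matching the description in the excerpt above; `base_learner` is a hypothetical callable that trains a model from a subsample, and the returned models are assumed to expose a `.predict` method:

```python
import random
from collections import Counter

def bagging_train(training_data, num_models, subsample_size, base_learner, seed=0):
    rng = random.Random(seed)
    # Each subsample is drawn with replacement from the training data.
    subsamples = [rng.choices(training_data, k=subsample_size) for _ in range(num_models)]
    return [base_learner(s) for s in subsamples]

def bagging_predict(models, x):
    # Majority vote among the base models.
    votes = Counter(model.predict(x) for model in models)
    return votes.most_common(1)[0][0]
```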

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks

1 code implementation • 11 Aug 2020 • Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we show that bagging with an arbitrary base learning algorithm provably predicts the same label for a testing example when the number of modified, deleted, and/or inserted training examples is bounded by a threshold.

Data Poisoning Ensemble Learning

Backdoor Attacks to Graph Neural Networks

1 code implementation • 19 Jun 2020 • Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Specifically, we propose a subgraph-based backdoor attack to GNN for graph classification.

Backdoor Attack General Classification +2

Stealing Links from Graph Neural Networks

no code implementations • 5 May 2020 • Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang

In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.

Fraud Detection Recommendation Systems

On Certifying Robustness against Backdoor Attacks via Randomized Smoothing

no code implementations • 26 Feb 2020 • Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Specifically, in this work, we study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.

Backdoor Attack

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

no code implementations • 19 Feb 2020 • Minghong Fang, Neil Zhenqiang Gong, Jia Liu

Given the number of fake users the attacker can inject, we formulate the crafting of rating scores for the fake users as an optimization problem.

Data Poisoning Recommendation Systems

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

1 code implementation • ICLR 2020 • Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong

For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8% when the $\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255).
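A sketch of the randomized-smoothing prediction step that results like the one above are built on: classify many Gaussian-noised copies of the input and rank labels by vote count, the top-k set coming from that ranking. `classify` is a hypothetical base classifier returning a label, and `sigma` is an assumed noise level; the certification itself is not shown.

```python
import numpy as np
from collections import Counter

def smoothed_topk(x, classify, sigma=0.5, num_samples=1000, k=5, seed=0):
    rng = np.random.default_rng(seed)
    counts = Counter()
    for _ in range(num_samples):
        noisy = x + rng.normal(scale=sigma, size=x.shape)  # Gaussian-noised copy
        counts[classify(noisy)] += 1
    return [label for label, _ in counts.most_common(k)]   # labels ranked by vote count
```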

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

no code implementations • 26 Nov 2019 • Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.

BIG-bench Machine Learning Data Poisoning +2

Data Poisoning Attacks to Local Differential Privacy Protocols

no code implementations • 5 Nov 2019 • Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics.

Data Poisoning Cryptography and Security Distributed, Parallel, and Cluster Computing

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

2 code implementations • 23 Sep 2019 • Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong

Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence score vector predicted by the target classifier as input and predicts whether the data sample is a member or non-member of the target classifier's training dataset.

Inference Attack Membership Inference Attack
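A hedged sketch of the membership inference attack described in the MemGuard excerpt above (the attack that MemGuard defends against, not the defense itself): a binary member-vs-non-member classifier trained on confidence score vectors. Shadow data with known membership labels is assumed to be available to the attacker.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_membership_attack(member_scores, nonmember_scores):
    """member_scores / nonmember_scores: 2-D arrays of confidence score vectors."""
    X = np.vstack([member_scores, nonmember_scores])
    y = np.concatenate([np.ones(len(member_scores)), np.zeros(len(nonmember_scores))])
    attack = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    attack.fit(X, y)
    return attack

def infer_membership(attack, confidence_vector):
    # 1 = predicted member of the target classifier's training set.
    return int(attack.predict(np.asarray(confidence_vector).reshape(1, -1))[0])
```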

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges

no code implementations • 17 Sep 2019 • Jinyuan Jia, Neil Zhenqiang Gong

To defend against inference attacks, we can add carefully crafted noise into the public data to turn them into adversarial examples, such that attackers' classifiers make incorrect predictions for the private data.

BIG-bench Machine Learning Inference Attack
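A generic gradient-based sketch of the idea in the excerpt above, not the paper's defense: perturb the public data so that the attacker's inference classifier mispredicts the private attribute. PyTorch is assumed; `attacker_model` is a hypothetical differentiable stand-in for the attacker's classifier, and `private_label` is the true private attribute as a class-index tensor.

```python
import torch

def evasion_noise(public_x, private_label, attacker_model, epsilon=0.05):
    x = public_x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(attacker_model(x), private_label)
    loss.backward()
    # Move the public data in the direction that increases the attacker's loss.
    return (public_x + epsilon * x.grad.sign()).detach()
```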

Attacking Graph-based Classification via Manipulating the Graph Structure

no code implementations • 1 Mar 2019 • Binghui Wang, Neil Zhenqiang Gong

Results show that our attacks 1) can effectively evade graph-based classification methods; 2) do not require access to the true parameters, true training dataset, and/or complete graph; and 3) outperform the existing attack for evading collective classification methods and some graph neural network methods.

Cryptography and Security

Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation

no code implementations • 4 Dec 2018 • Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong

To address the computational challenge, we propose to jointly learn the edge weights and propagate the reputation scores, which is essentially an approximate solution to the optimization problem.

Classification General Classification +1

Poisoning Attacks to Graph-Based Recommender Systems

no code implementations • 11 Sep 2018 • Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu

To address the challenge, we formulate the poisoning attacks as an optimization problem, solving which determines the rating scores for the fake users.

Recommendation Systems

AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning

1 code implementation • 13 May 2018 • Jinyuan Jia, Neil Zhenqiang Gong

Specifically, game-theoretic defenses require solving intractable optimization problems, while correlation-based defenses incur large utility loss of users' public data.

BIG-bench Machine Learning

Stealing Hyperparameters in Machine Learning

no code implementations • 14 Feb 2018 • Binghui Wang, Neil Zhenqiang Gong

In this work, we propose attacks that steal the hyperparameters learned by a learner.

BIG-bench Machine Learning
