Search Results for author: Neil Zhenqiang Gong

Found 69 papers, 24 papers with code

Stealing Hyperparameters in Machine Learning

no code implementations 14 Feb 2018 Binghui Wang, Neil Zhenqiang Gong

In this work, we propose attacks on stealing the hyperparameters that are learned by a learner.

BIG-bench Machine Learning regression

AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning

1 code implementation 13 May 2018 Jinyuan Jia, Neil Zhenqiang Gong

Specifically, game-theoretic defenses require solving intractable optimization problems, while correlation-based defenses incur large utility loss of users' public data.

Attribute BIG-bench Machine Learning

Poisoning Attacks to Graph-Based Recommender Systems

no code implementations 11 Sep 2018 Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu

To address the challenge, we formulate the poisoning attacks as an optimization problem whose solution determines the rating scores for the fake users.

Recommendation Systems

Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation

no code implementations 4 Dec 2018 Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong

To address the computational challenge, we propose to jointly learn the edge weights and propagate the reputation scores, which is essentially an approximate solution to the optimization problem.

Attribute General Classification +2

Attacking Graph-based Classification via Manipulating the Graph Structure

no code implementations 1 Mar 2019 Binghui Wang, Neil Zhenqiang Gong

Results show that our attacks 1) can effectively evade graph-based classification methods; 2) do not require access to the true parameters, true training dataset, and/or complete graph; and 3) outperform the existing attack for evading collective classification methods and some graph neural network methods.

Cryptography and Security

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges

no code implementations 17 Sep 2019 Jinyuan Jia, Neil Zhenqiang Gong

To defend against inference attacks, we can add carefully crafted noise into the public data to turn them into adversarial examples, such that attackers' classifiers make incorrect predictions for the private data.
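
As a hedged illustration of this idea, the sketch below applies an FGSM-style perturbation to a user's public feature vector so that a hypothetical attack classifier `attack_clf` is pushed toward misclassifying the private attribute `y_private`; the names, step size, and one-step attack are illustrative placeholders, not the paper's actual defense.

```python
# Minimal sketch, assuming a differentiable attack classifier and a 1-D
# public feature vector; this is NOT the paper's exact algorithm.
import torch
import torch.nn.functional as F

def perturb_public_data(attack_clf, x_public, y_private, epsilon=0.1):
    """Return an adversarially perturbed copy of the public data."""
    x = x_public.clone().detach().requires_grad_(True)
    logits = attack_clf(x.unsqueeze(0))              # shape (1, num_classes)
    loss = F.cross_entropy(logits, y_private.unsqueeze(0))
    loss.backward()
    # Move the public data in the direction that increases the attacker's loss,
    # so the attacker's classifier mispredicts the private attribute.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()
```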

BIG-bench Machine Learning Inference Attack

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

3 code implementations 23 Sep 2019 Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong

Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence score vector predicted by the target classifier as input and predicts whether the data sample is a member or non-member of the target classifier's training dataset.
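
As a concrete (hypothetical) illustration of this attack setup, one could train such a binary membership classifier on confidence score vectors with scikit-learn; the Dirichlet-sampled vectors below are synthetic stand-ins for scores obtained by querying a real target classifier.

```python
# Minimal sketch of a confidence-vector membership classifier (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins: members tend to get sharper (more confident) score vectors.
member_scores = rng.dirichlet(np.full(10, 0.3), size=500)
nonmember_scores = rng.dirichlet(np.full(10, 1.0), size=500)

X = np.vstack([member_scores, nonmember_scores])
y = np.concatenate([np.ones(len(member_scores)), np.zeros(len(nonmember_scores))])

# Binary attack classifier: confidence score vector -> member (1) / non-member (0).
attack_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(attack_clf.predict(member_scores[:5]))  # membership guesses for five samples
```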

Inference Attack Membership Inference Attack

Data Poisoning Attacks to Local Differential Privacy Protocols

no code implementations 5 Nov 2019 Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics.

Data Poisoning Cryptography and Security Distributed, Parallel, and Cluster Computing

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

no code implementations 26 Nov 2019 Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.

BIG-bench Machine Learning Data Poisoning +2

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

1 code implementation ICLR 2020 Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong

For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8% when the $\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255).

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

no code implementations 19 Feb 2020 Minghong Fang, Neil Zhenqiang Gong, Jia Liu

Given the number of fake users the attacker can inject, we formulate the crafting of rating scores for the fake users as an optimization problem.

Data Poisoning Recommendation Systems

On Certifying Robustness against Backdoor Attacks via Randomized Smoothing

no code implementations 26 Feb 2020 Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Specifically, in this work, we study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.

Backdoor Attack

Stealing Links from Graph Neural Networks

no code implementations 5 May 2020 Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang

In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.

Fraud Detection Recommendation Systems

Backdoor Attacks to Graph Neural Networks

2 code implementations 19 Jun 2020 Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Specifically, we propose a subgraph-based backdoor attack to GNN for graph classification.

Backdoor Attack General Classification +2

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks

1 code implementation 11 Aug 2020 Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we show that bagging with an arbitrary base learning algorithm provably predicts the same label for a testing example when the number of modified, deleted, and/or inserted training examples is bounded by a threshold.

Data Poisoning Ensemble Learning

On the Intrinsic Differential Privacy of Bagging

no code implementations 22 Aug 2020 Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

Bagging, a popular ensemble learning framework, randomly creates some subsamples of the training data, trains a base model for each subsample using a base learner, and takes majority vote among the base models when making predictions.
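
That procedure maps directly onto a short sketch; the base learner (a decision tree here), the subsample size, and sampling with replacement are arbitrary illustrative choices.

```python
# Minimal sketch of bagging: subsample, train base models, majority-vote.
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def train_bagging(X, y, n_models=20, subsample_size=100, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=subsample_size, replace=True)  # random subsample
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))  # base model
    return models

def predict_bagging(models, x):
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    return Counter(votes).most_common(1)[0][0]  # majority vote among base models
```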

BIG-bench Machine Learning Ensemble Learning

Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation

no code implementations 24 Aug 2020 Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation.

Cryptography and Security

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

no code implementations 26 Oct 2020 Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Moreover, to be robust against post-processing, we leverage Turbo codes, a type of error-correcting codes, to encode the message before embedding it to the DNN classifier.

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

no code implementations ICLR 2022 Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong

For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.

Recommendation Systems

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks

no code implementations 7 Dec 2020 Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong

Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses.

Data Poisoning

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

1 code implementation 27 Dec 2020 Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong

Finally, the service provider computes the average of the normalized local model updates weighted by their trust scores as a global model update, which is used to update the global model.
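
A minimal sketch of that aggregation step is shown below, assuming the trust scores and the server's own model update have already been computed; it is not the full FLTrust algorithm.

```python
# Minimal sketch of trust-weighted aggregation (assumed inputs: numpy arrays).
import numpy as np

def aggregate(local_updates, trust_scores, server_update):
    trust_scores = np.asarray(trust_scores, dtype=float)
    server_norm = np.linalg.norm(server_update)
    # Normalize each local update to the magnitude of the server's update.
    normalized = [u * server_norm / (np.linalg.norm(u) + 1e-12) for u in local_updates]
    # Average the normalized updates, weighted by their trust scores.
    weighted_sum = sum(s * u for s, u in zip(trust_scores, normalized))
    return weighted_sum / (trust_scores.sum() + 1e-12)
```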

Federated Learning

Practical Blind Membership Inference Attack via Differential Comparisons

1 code implementation 5 Jan 2021 Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao

The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target; the latter, given only black-box probing access to the target model, cannot make an effective inference of unknowns compared with MI attacks using shadow models, due to the insufficient number of qualified samples labeled with ground-truth membership information.

Inference Attack Membership Inference Attack

Data Poisoning Attacks to Deep Learning Based Recommender Systems

no code implementations 7 Jan 2021 Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu

Specifically, we formulate our attack as an optimization problem, such that the injected ratings would maximize the number of normal users to whom the target items are recommended.

Data Poisoning Recommendation Systems

Provably Secure Federated Learning against Malicious Clients

no code implementations 3 Feb 2021 Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients.

Federated Learning Human Activity Recognition

Data Poisoning Attacks and Defenses to Crowdsourcing Systems

no code implementations 18 Feb 2021 Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu

Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks.

Data Poisoning

PointGuard: Provably Robust 3D Point Cloud Classification

no code implementations CVPR 2021 Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

Our first major theoretical contribution is that we show PointGuard provably predicts the same label for a 3D point cloud when the number of adversarially modified, added, and/or deleted points is bounded.

3D Point Cloud Classification Autonomous Driving +4

Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation

1 code implementation 28 May 2021 Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang

Inspired by the idea of vector quantization that uses cluster centroids to approximate items, we propose LISA (LInear-time Self Attention), which enjoys both the effectiveness of vanilla self-attention and the efficiency of sparse attention.

Quantization Sequential Recommendation

Rethinking Lifelong Sequential Recommendation with Incremental Multi-Interest Attention

no code implementations 28 May 2021 Yongji Wu, Lu Yin, Defu Lian, Mingyang Yin, Neil Zhenqiang Gong, Jingren Zhou, Hongxia Yang

With the rapid development of these services in the last two decades, users have accumulated a massive amount of behavior data.

Sequential Recommendation

Understanding the Security of Deepfake Detection

no code implementations 5 Jul 2021 Xiaoyu Cao, Neil Zhenqiang Gong

Existing studies mainly focused on improving the detection performance in non-adversarial settings, leaving security of deepfake detection in adversarial settings largely unexplored.

DeepFake Detection Face Swapping

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

3 code implementations 1 Aug 2021 Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong

In particular, our BadEncoder injects backdoors into a pre-trained image encoder such that the downstream classifiers built based on the backdoored image encoder for different downstream tasks simultaneously inherit the backdoor behavior.

Backdoor Attack Self-Supervised Learning

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning

no code implementations 25 Aug 2021 Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong

EncoderMI can be used 1) by a data owner to audit whether its (public) data was used to pre-train an image encoder without its authorization or 2) by an attacker to compromise privacy of the training data when it is private/sensitive.

Contrastive Learning

FaceGuard: Proactive Deepfake Detection

no code implementations 13 Sep 2021 Yuankun Yang, Chenyue Liang, Hongyu He, Xiaoyu Cao, Neil Zhenqiang Gong

A key limitation of passive detection is that it cannot detect fake faces that are generated by new deepfake generation methods.

DeepFake Detection Face Swapping

HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

1 code implementation 23 Nov 2021 Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen

We therefore propose HERO, a Hessian-enhanced robust optimization method, to minimize the Hessian eigenvalues through a gradient-based training process, simultaneously improving the generalization and quantization performance.

Quantization

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning

no code implementations 15 Jan 2022 Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong

A pre-trained encoder may be deemed confidential because its training requires a large amount of data and computation resources, and because its public release may facilitate misuse of AI, e.g., for deepfake generation.

Self-Supervised Learning

MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

1 code implementation 16 Mar 2022 Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we assume the attacker injects fake clients to a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learnt global model has low accuracy for many indiscriminate test inputs.

Federated Learning Model Poisoning

FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients

1 code implementation 19 Jul 2022 Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

FLDetector aims to detect and remove the majority of the malicious clients such that a Byzantine-robust FL method can learn an accurate global model using the remaining clients.

Federated Learning Model Poisoning

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

1 code implementation 25 Jul 2022 Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang

The results show that early stopping can mitigate the membership inference attack, but at the cost of degrading the model's utility.

Data Augmentation Inference Attack +1

FLCert: Provably Secure Federated Learning against Poisoning Attacks

no code implementations 2 Oct 2022 Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong

Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models to classify a test input.
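
A minimal sketch of this grouping-and-voting idea is given below; the `run_federated_learning` routine and the models' `predict` method are hypothetical placeholders for any existing FL method and model interface.

```python
# Minimal sketch of group-wise training plus majority-vote prediction.
import numpy as np
from collections import Counter

def train_groups(clients, n_groups, run_federated_learning, seed=0):
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(len(clients))
    groups = np.array_split(shuffled, n_groups)  # disjoint client groups
    # One global model per group, trained with any existing FL method.
    return [run_federated_learning([clients[i] for i in g]) for g in groups]

def predict_by_vote(global_models, x):
    votes = [int(m.predict(x)) for m in global_models]
    return Counter(votes).most_common(1)[0][0]  # majority vote among group models
```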

Federated Learning

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

1 code implementation 3 Oct 2022 Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong

In this work, we propose MultiGuard, the first provably robust defense against adversarial examples to multi-label classification.

Classification Multi-class Classification +1

FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information

no code implementations 20 Oct 2022 Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong

Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods and detecting malicious clients when there are a large number of them.

Federated Learning

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning

no code implementations 15 Nov 2022 Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

In this work, we take the first step to analyze the limitations of existing backdoor attacks and propose CorruptEncoder, a new data poisoning based backdoor attack (DPBA) to contrastive learning (CL).

Contrastive Learning Data Poisoning

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning

no code implementations 6 Dec 2022 Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong

In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.

Data Poisoning Machine Unlearning +2

AFLGuard: Byzantine-robust Asynchronous Federated Learning

no code implementations 13 Dec 2022 Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley

Asynchronous FL aims to address this challenge by enabling the server to update the model once any client's model update reaches it without waiting for other clients' model updates.

Federated Learning

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service

no code implementations 7 Jan 2023 Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong

For the first question, we show that the cloud service only needs to provide two APIs, which we carefully design, to enable a client to certify the robustness of its downstream classifier with a minimal number of queries to the APIs.

Self-Supervised Learning

PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees

no code implementations CVPR 2023 Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong

Existing certified defenses against adversarial point clouds suffer from a key limitation: their certified robustness guarantees are probabilistic, i.e., they produce an incorrect certified robustness guarantee with some probability.

Autonomous Driving Classification +1

PORE: Provably Robust Recommender Systems against Data Poisoning Attacks

1 code implementation 26 Mar 2023 Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong

PORE can transform any existing recommender system to be provably robust against any untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system.

Data Poisoning Recommendation Systems

Evading Watermark based Detection of AI-Generated Content

1 code implementation 5 May 2023 Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong

Specifically, a watermark is embedded into an AI-generated content before it is released.

Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework

no code implementations 11 Jun 2023 Minglei Yin, Bin Liu, Neil Zhenqiang Gong, Xin Li

Our proposed method can simultaneously (1) secure VARS against adversarial attacks characterized by local perturbations, via image reconstruction based on global vision transformers; and (2) accurately detect adversarial examples using a novel contrastive learning approach.

Contrastive Learning Image Reconstruction +1

DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks

1 code implementation 29 Sep 2023 Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie

Moreover, DyVal-generated samples are not only evaluation sets, but also helpful data for fine-tuning to improve the performance of LLMs on existing benchmarks.

Logical Reasoning

MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use

1 code implementation 4 Oct 2023 Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun

However, in scenarios where LLMs serve as intelligent agents, as seen in applications like AutoGPT and MetaGPT, LLMs are expected to engage in intricate decision-making processes that involve deciding whether to employ a tool and selecting the most suitable tool(s) from a collection of available tools to fulfill user requests.

Decision Making

Prompt Injection Attacks and Defenses in LLM-Integrated Applications

1 code implementation 19 Oct 2023 Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong

As a result, the literature lacks a systematic understanding of prompt injection attacks and their defenses.

Competitive Advantage Attacks to Decentralized Federated Learning

no code implementations 20 Oct 2023 Yuqi Jia, Minghong Fang, Neil Zhenqiang Gong

In SelfishAttack, a set of selfish clients aim to achieve competitive advantages over the remaining non-selfish ones, i.e., the final learnt local models of the selfish clients are more accurate than those of the non-selfish ones.

Federated Learning

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

1 code implementation 3 Dec 2023 Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen

This process allows local devices to train smaller surrogate models while enabling the training of a larger global model on the server, effectively minimizing resource utilization.

Federated Learning

Poisoning Federated Recommender Systems with Fake Users

no code implementations 18 Feb 2024 Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong

Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity.

Federated Learning Recommendation Systems

Visual Hallucinations of Multi-modal Large Language Models

1 code implementation 22 Feb 2024 Wen Huang, Hongbin Liu, Minxin Guo, Neil Zhenqiang Gong

We find that existing MLLMs such as GPT-4V, LLaVA-1.5, and MiniGPT-v2 hallucinate for a large fraction of the instances in our benchmark.

Hallucination Question Answering +1

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

no code implementations 22 Feb 2024 Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong

However, foundation models are vulnerable to backdoor attacks and a backdoored foundation model is a single-point-of-failure of the AI ecosystem, e.g., multiple downstream classifiers inherit the backdoor vulnerabilities simultaneously.

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

no code implementations 5 Mar 2024 Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong

Recent studies have revealed that federated learning (FL), once considered secure due to clients not sharing their private data with the server, is vulnerable to attacks such as client-side training data distribution inference, where a malicious client can recreate the victim's data.

Federated Learning

Optimization-based Prompt Injection Attack to LLM-as-a-Judge

no code implementations 26 Mar 2024 Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong

LLM-as-a-Judge is a novel solution that can assess textual information with large language models (LLMs).

Decision Making

Watermark-based Detection and Attribution of AI-Generated Content

no code implementations 5 Apr 2024 Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Neil Zhenqiang Gong

Several companies, such as Google, Microsoft, and OpenAI, have deployed techniques to watermark AI-generated content to enable proactive detection.

SoK: Gradient Leakage in Federated Learning

no code implementations 8 Apr 2024 Jiacheng Du, Jiahui Hu, Zhibo Wang, Peng Sun, Neil Zhenqiang Gong, Kui Ren

While GIAs have demonstrated effectiveness under ideal settings and auxiliary assumptions, their actual efficacy against practical FL systems remains under-explored.

Federated Learning Misconceptions
