Search Results for author: Binghui Wang

Found 40 papers, 18 papers with code

Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach

no code implementations24 Sep 2024 Ruo Yang, Binghui Wang, Mustafa Bilgic

For image classifiers, these methods typically provide an attribution score to each pixel in the image to quantify its contribution to the prediction.

Graph Neural Network Causal Explanation via Neural Causal Models

1 code implementation12 Jul 2024 Arman Behnam, Binghui Wang

Our explainer is based on the observation that a graph often consists of a causal underlying subgraph.

Causal Inference Graph Neural Network

Graph Neural Network Explanations are Fragile

1 code implementation5 Jun 2024 Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang

Explainable Graph Neural Network (GNN) has emerged recently to foster the trust of using GNNs.

Adversarial Attack Graph Neural Network

Identifying Backdoored Graphs in Graph Neural Network Training: An Explanation-Based Approach with Novel Metrics

no code implementations26 Mar 2024 Jane Downer, Ren Wang, Binghui Wang

Graph Neural Networks (GNNs) have gained popularity in numerous domains, yet they are vulnerable to backdoor attacks that can compromise their performance and ethical application.

Graph Neural Network

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks

1 code implementation4 Mar 2024 Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang

Machine learning (ML) is vulnerable to inference (e. g., membership inference, property inference, and data reconstruction) attacks that aim to infer the private information of training data or dataset.

Inference Attack Privacy Preserving +1

PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models

1 code implementation12 Feb 2024 Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia

Based on this attack surface, we propose PoisonedRAG, the first knowledge corruption attack to RAG, where an attacker could inject a few malicious texts into the knowledge database of a RAG system to induce an LLM to generate an attacker-chosen target answer for an attacker-chosen target question.

Answer Generation Hallucination +2

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

no code implementations31 Jul 2023 Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren

The language models, especially the basic text classification models, have been shown to be susceptible to textual adversarial attacks such as synonym substitution and word insertion attacks.

text-classification Text Classification

Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence

1 code implementation10 Apr 2023 Hanbin Hong, Xinyu Zhang, Binghui Wang, Zhongjie Ba, Yuan Hong

Specifically, we establish a novel theoretical foundation for ensuring the ASP of the black-box attack with randomized adversarial examples (AEs).

Benchmarking speech-recognition +1

A Certified Radius-Guided Attack Framework to Image Segmentation Models

1 code implementation5 Apr 2023 Wenjie Qu, Youqi Li, Binghui Wang

We are the first, from the attacker perspective, to leverage the properties of certified radius and propose a certified radius guided attack framework against image segmentation models.

Image Classification Image Segmentation +2

IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients

1 code implementation CVPR 2023 Ruo Yang, Binghui Wang, Mustafa Bilgic

Integrated Gradients (IG) as well as its variants are well-known techniques for interpreting the decisions of deep neural networks.

UniCR: Universally Approximated Certified Robustness via Randomized Smoothing

no code implementations5 Jul 2022 Hanbin Hong, Binghui Wang, Yuan Hong

We study certified robustness of machine learning classifiers against adversarial perturbations.

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks

1 code implementation11 Jun 2022 Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam

Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risks for the training dataset used in the model training.

Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees

1 code implementation CVPR 2022 Binghui Wang, Youqi Li, Pan Zhou

We then propose an online attack based on bandit optimization which is proven to be {sublinear} to the query number $T$, i. e., $\mathcal{O}(\sqrt{N}T^{3/4})$ where $N$ is the number of nodes in the graph.

Graph Classification Node Classification

Bandits for Black-box Attacks to Graph Neural Networks with Structure Perturbation

no code implementations29 Sep 2021 Binghui Wang, Youqi Li, Pan Zhou

However, many recent works have demonstrated that an attacker can mislead GNN models by slightly perturbing the graph structure.

Graph Classification Node Classification

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

no code implementations21 Aug 2021 Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu

We also evaluate the effectiveness of our attack under two defenses: one is well-designed adversarial graph detector and the other is that the target GNN model itself is equipped with a defense to prevent adversarial graph generation.

Adversarial Attack Graph Classification +2

Privacy-Preserving Representation Learning on Graphs: A Mutual Information Perspective

no code implementations3 Jul 2021 Binghui Wang, Jiayi Guo, Ang Li, Yiran Chen, Hai Li

Existing representation learning methods on graphs have achieved state-of-the-art performance on various graph-related tasks such as node classification, link prediction, etc.

Link Prediction Node Classification +2

Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective

1 code implementation CVPR 2021 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.

Federated Learning Inference Attack

Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective

4 code implementations8 Dec 2020 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

In this work, we show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.

Federated Learning

GraphFL: A Federated Learning Framework for Semi-Supervised Node Classification on Graphs

no code implementations8 Dec 2020 Binghui Wang, Ang Li, Hai Li, Yiran Chen

However, existing FL methods 1) perform poorly when data across clients are non-IID, 2) cannot handle data with new label domains, and 3) cannot leverage unlabeled data, while all these issues naturally happen in real-world graph-based problems.

Federated Learning General Classification +2

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

no code implementations ICLR 2022 Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong

For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69. 2\% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.

Recommendation Systems

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

no code implementations26 Oct 2020 Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Moreover, to be robust against post-processing, we leverage Turbo codes, a type of error-correcting codes, to encode the message before embedding it to the DNN classifier.

Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs

no code implementations1 Sep 2020 Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen

Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications such as online recommendations, studies on disease contagion, organizational studies, etc.

Graph Embedding Link Prediction +2

Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function

1 code implementation1 Sep 2020 Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang, Hai Li, Yiran Chen

Specifically, we first introduce two influence functions, i. e., feature-label influence and label influence, that are defined on GNNs and label propagation (LP), respectively.

Graph Neural Network Node Classification

Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation

no code implementations24 Aug 2020 Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation.

Cryptography and Security

LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets

1 code implementation7 Aug 2020 Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, Hai Li

Rather than learning a shared global model in classic federated learning, each client learns a personalized model via LotteryFL; the communication cost can be significantly reduced due to the compact size of lottery networks.

Federated Learning

Backdoor Attacks to Graph Neural Networks

2 code implementations19 Jun 2020 Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Specifically, we propose a \emph{subgraph based backdoor attack} to GNN for graph classification.

Backdoor Attack General Classification +2

On Certifying Robustness against Backdoor Attacks via Randomized Smoothing

no code implementations26 Feb 2020 Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Specifically, in this work, we study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.

Backdoor Attack

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

1 code implementation ICLR 2020 Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong

For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62. 8\% when the $\ell_2$-norms of the adversarial perturbations are less than 0. 5 (=127/255).

Attacking Graph-based Classification via Manipulating the Graph Structure

no code implementations1 Mar 2019 Binghui Wang, Neil Zhenqiang Gong

Results show that our attacks 1) can effectively evade graph-based classification methods; 2) do not require access to the true parameters, true training dataset, and/or complete graph; and 3) outperform the existing attack for evading collective classification methods and some graph neural network methods.

Cryptography and Security

Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation

no code implementations4 Dec 2018 Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong

To address the computational challenge, we propose to jointly learn the edge weights and propagate the reputation scores, which is essentially an approximate solution to the optimization problem.

Attribute General Classification +2

Stealing Hyperparameters in Machine Learning

no code implementations14 Feb 2018 Binghui Wang, Neil Zhenqiang Gong

In this work, we propose attacks on stealing the hyperparameters that are learned by a learner.

BIG-bench Machine Learning regression

Robust Multi-subspace Analysis Using Novel Column L0-norm Constrained Matrix Factorization

no code implementations27 Jan 2018 Binghui Wang, Chuang Lin

To tackle all these problems, we propose a method, called Matrix Factorization with Column L0-norm constraint (MFC0), that can simultaneously learn the basis for each subspace, generate a direct sparse representation for each data sample, as well as removing errors in the data in an efficient way.

Clustering

Cannot find the paper you are looking for? You can Submit a new open access paper.