no code implementations • 24 Sep 2024 • Ruo Yang, Binghui Wang, Mustafa Bilgic
For image classifiers, these methods typically provide an attribution score to each pixel in the image to quantify its contribution to the prediction.
no code implementations • 22 Aug 2024 • Zifan Wang, Binghui Zhang, Meng Pang, Yuan Hong, Binghui Wang
Federated learning (FL) is an emerging collaborative learning paradigm that aims to protect data privacy.
1 code implementation • 20 Jul 2024 • Shuya Feng, Meisam Mohammady, Hanbin Hong, Shenao Yan, Ashish Kundu, Binghui Wang, Yuan Hong
DP-SGD) to significantly boost accuracy and convergence.
1 code implementation • 12 Jul 2024 • Arman Behnam, Binghui Wang
Our explainer is based on the observation that a graph often consists of a causal underlying subgraph.
1 code implementation • 5 Jun 2024 • Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang
Explainable Graph Neural Network (GNN) has emerged recently to foster the trust of using GNNs.
no code implementations • 26 Mar 2024 • Jane Downer, Ren Wang, Binghui Wang
Graph Neural Networks (GNNs) have gained popularity in numerous domains, yet they are vulnerable to backdoor attacks that can compromise their performance and ethical application.
1 code implementation • 4 Mar 2024 • Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang
Machine learning (ML) is vulnerable to inference attacks (e.g., membership inference, property inference, and data reconstruction) that aim to infer private information about the training data or dataset.
1 code implementation • 12 Feb 2024 • Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia
Based on this attack surface, we propose PoisonedRAG, the first knowledge corruption attack to RAG, where an attacker could inject a few malicious texts into the knowledge database of a RAG system to induce an LLM to generate an attacker-chosen target answer for an attacker-chosen target question.
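To make the threat model concrete, here is a minimal sketch of the knowledge-corruption idea; craft_malicious_text, poison_knowledge_base, and the crafting rule are illustrative placeholders, not the paper's implementation:

def craft_malicious_text(target_question: str, target_answer: str) -> str:
    # A poisoned passage needs two properties: (1) retrieval, i.e., it should be
    # similar enough to the target question that the retriever ranks it highly;
    # (2) generation, i.e., its content should steer the LLM toward the
    # attacker-chosen answer.
    return f"{target_question} The correct answer is {target_answer}."

def poison_knowledge_base(knowledge_db: list, target_question: str,
                          target_answer: str, num_injected: int = 5) -> None:
    # In the paper's threat model, injecting only a few such texts suffices.
    for _ in range(num_injected):
        knowledge_db.append(craft_malicious_text(target_question, target_answer))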
1 code implementation • IEEE Globecom Workshops (GC Wkshps) 2023 • Yaxin Yu, Yinglei Teng, Binghui Wang, An Liu, Vincent Lau
Finally, we demonstrate that the M-Net variants achieve SOTA performance by only deepening the M-Net decoder.
no code implementations • 31 Jul 2023 • Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren
Language models, especially basic text classification models, have been shown to be susceptible to textual adversarial attacks such as synonym substitution and word insertion attacks.
1 code implementation • 10 Apr 2023 • Hanbin Hong, Xinyu Zhang, Binghui Wang, Zhongjie Ba, Yuan Hong
Specifically, we establish a novel theoretical foundation for ensuring the attack success probability (ASP) of the black-box attack with randomized adversarial examples (AEs).
1 code implementation • 5 Apr 2023 • Wenjie Qu, Youqi Li, Binghui Wang
We are the first, from the attacker's perspective, to leverage the properties of the certified radius and propose a certified radius guided attack framework against image segmentation models.
1 code implementation • CVPR 2023 • Ruo Yang, Binghui Wang, Mustafa Bilgic
Integrated Gradients (IG) as well as its variants are well-known techniques for interpreting the decisions of deep neural networks.
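As background for how such attribution methods produce per-pixel scores, here is a minimal NumPy sketch of Integrated Gradients with a Riemann-sum approximation of the path integral; grad_fn is a hypothetical callable returning the gradient of the target-class score with respect to the input:

import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    # Interpolate between the baseline x' and the input x, average the
    # gradients along the path, then scale by (x - x').
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]   # skip alpha = 0
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    avg_grad = grads.mean(axis=0)                   # path-averaged gradient
    return (x - baseline) * avg_grad                # per-pixel attribution scores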
no code implementations • 5 Jul 2022 • Hanbin Hong, Binghui Wang, Yuan Hong
We study certified robustness of machine learning classifiers against adversarial perturbations.
1 code implementation • 11 Jun 2022 • Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risks for the training dataset used in the model training.
1 code implementation • CVPR 2022 • Binghui Wang, Youqi Li, Pan Zhou
We then propose an online attack based on bandit optimization whose regret is proven to be sublinear in the query number $T$, i.e., $\mathcal{O}(\sqrt{N}T^{3/4})$, where $N$ is the number of nodes in the graph.
no code implementations • 15 Oct 2021 • Bingbing Li, Hongwu Peng, Rajat Sainju, Junhuan Yang, Lei Yang, Yueying Liang, Weiwen Jiang, Binghui Wang, Hang Liu, Caiwen Ding
In this paper, we propose a novel gender bias detection method by utilizing attention map for transformer-based models.
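For context, extracting attention maps from a transformer can look like the sketch below (my illustration of the general idea using Hugging Face transformers, not the paper's detection method; the averaging over layers and heads and the focus on a single gendered token are assumptions):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The doctor said she would review the nurse's notes."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one (batch, heads, seq, seq) tensor per layer;
# average over layers and heads to get a single token-to-token map.
attn = torch.stack(outputs.attentions).mean(dim=(0, 2))[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
she_idx = tokens.index("she")
print(f"total attention received by 'she': {attn[:, she_idx].sum().item():.3f}")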
no code implementations • 29 Sep 2021 • Binghui Wang, Youqi Li, Pan Zhou
However, many recent works have demonstrated that an attacker can mislead GNN models by slightly perturbing the graph structure.
no code implementations • 21 Aug 2021 • Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu
We also evaluate the effectiveness of our attack under two defenses: one is a well-designed adversarial graph detector, and the other equips the target GNN model itself with a defense to prevent adversarial graph generation.
no code implementations • 3 Jul 2021 • Binghui Wang, Jiayi Guo, Ang Li, Yiran Chen, Hai Li
Existing representation learning methods on graphs have achieved state-of-the-art performance on various graph-related tasks such as node classification, link prediction, etc.
1 code implementation • CVPR 2021 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.
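The flavor of such a representation-perturbation defense can be sketched as follows; the pruning rule below is a deliberate simplification for illustration, not the paper's exact perturbation scheme:

import numpy as np

def defend_representation_grad(rep_grad: np.ndarray, prune_ratio: float = 0.8) -> np.ndarray:
    # Zero out the largest-magnitude entries of the representation-layer
    # gradient before sharing it, so gradient-inversion reconstructions of the
    # private input degrade; gradients of the other layers are shared unchanged.
    flat = rep_grad.ravel().copy()
    k = int(prune_ratio * flat.size)
    idx = np.argsort(np.abs(flat))[-k:]   # indices of the k largest entries
    flat[idx] = 0.0
    return flat.reshape(rep_grad.shape)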
1 code implementation • 22 Apr 2021 • Qiming Wu, Zhikang Zou, Pan Zhou, Xiaoqing Ye, Binghui Wang, Ang Li
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
no code implementations • 24 Dec 2020 • Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong
In this work, we aim to address the key limitation of existing pMRF-based methods.
4 code implementations • 8 Dec 2020 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
In this work, we show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.
no code implementations • 8 Dec 2020 • Binghui Wang, Ang Li, Hai Li, Yiran Chen
However, existing FL methods 1) perform poorly when data across clients are non-IID, 2) cannot handle data with new label domains, and 3) cannot leverage unlabeled data, while all these issues naturally happen in real-world graph-based problems.
no code implementations • ICLR 2022 • Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2\% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
no code implementations • 26 Oct 2020 • Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
Moreover, to be robust against post-processing, we leverage Turbo codes, a type of error-correcting code, to encode the message before embedding it into the DNN classifier.
no code implementations • 1 Sep 2020 • Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications such as online recommendations, studies on disease contagion, organizational studies, etc.
1 code implementation • 1 Sep 2020 • Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang, Hai Li, Yiran Chen
Specifically, we first introduce two influence functions, i.e., feature-label influence and label influence, that are defined on GNNs and label propagation (LP), respectively.
no code implementations • 24 Aug 2020 • Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation.
1 code implementation • 7 Aug 2020 • Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, Hai Li
Rather than learning a shared global model in classic federated learning, each client learns a personalized model via LotteryFL; the communication cost can be significantly reduced due to the compact size of lottery networks.
2 code implementations • 19 Jun 2020 • Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
Specifically, we propose a \emph{subgraph based backdoor attack} to GNN for graph classification.
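A toy sketch of the subgraph-trigger idea, using networkx, is shown below; the trigger size, attachment rule, and target label are illustrative assumptions rather than the paper's settings:

import random
import networkx as nx

def make_trigger(num_nodes: int = 4, p: float = 0.9) -> nx.Graph:
    # A small, dense random subgraph used as the backdoor trigger.
    return nx.gnp_random_graph(num_nodes, p)

def inject_trigger(graph: nx.Graph, trigger: nx.Graph, target_label: int):
    # Attach the trigger to a couple of randomly chosen nodes and relabel the
    # graph with the attacker-chosen target label. Assumes nodes of `graph`
    # are labeled 0 .. n-1 so indices line up after the disjoint union.
    n = graph.number_of_nodes()
    poisoned = nx.disjoint_union(graph, trigger)   # trigger nodes become n, n+1, ...
    for a in random.sample(range(n), k=min(2, n)):
        poisoned.add_edge(a, n)                    # link anchor node to the trigger
    return poisoned, target_label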
no code implementations • NeurIPS 2020 • Nathan Inkawhich, Kevin J Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, Yiran Chen
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
no code implementations • 26 Feb 2020 • Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Specifically, in this work, we study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.
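As background, the standard Monte Carlo construction of a randomized-smoothing classifier looks like the sketch below; the paper adapts this idea to backdoor attacks, and base_classifier, sigma, and the sample count here are placeholders:

import numpy as np

def smoothed_predict(base_classifier, x: np.ndarray, sigma: float = 0.25,
                     num_samples: int = 1000, num_classes: int = 10) -> int:
    # Majority vote of the base classifier over Gaussian-noised copies of x.
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(num_samples):
        noisy = x + np.random.normal(scale=sigma, size=x.shape)
        counts[base_classifier(noisy)] += 1
    return int(counts.argmax())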
no code implementations • 9 Feb 2020 • Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong
However, several recent studies showed that community detection is vulnerable to adversarial structural perturbation.
1 code implementation • ICLR 2020 • Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong
For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8\% when the $\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255).
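For reference, certified-accuracy numbers of this kind build on the standard Gaussian randomized-smoothing certificate for the top-1 prediction (the paper derives an analogous bound for top-k predictions): $R = \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right)$, where $\sigma$ is the noise level, and $\underline{p_A}$ and $\overline{p_B}$ are a lower bound on the top-class probability and an upper bound on the runner-up probability under Gaussian noise.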
no code implementations • 1 Mar 2019 • Binghui Wang, Neil Zhenqiang Gong
Results show that our attacks 1) can effectively evade graph-based classification methods; 2) do not require access to the true parameters, true training dataset, and/or complete graph; and 3) outperform the existing attack for evading collective classification methods and some graph neural network methods.
no code implementations • 4 Dec 2018 • Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong
To address the computational challenge, we propose to jointly learn the edge weights and propagate the reputation scores, which is essentially an approximate solution to the optimization problem.
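A toy NumPy sketch of alternating between reputation propagation and edge-weight updates is given below; the specific weight-update rule is an assumption for illustration, not the paper's approximate solution:

import numpy as np

def joint_propagate(adj: np.ndarray, prior: np.ndarray, iters: int = 10,
                    alpha: float = 0.5) -> np.ndarray:
    # adj: binary adjacency matrix; prior: prior reputation scores in [0, 1].
    weights = adj.astype(float).copy()
    scores = prior.astype(float).copy()
    for _ in range(iters):
        # Propagation step: blend each node's prior with the weighted average
        # of its neighbors' current scores.
        row_sum = weights.sum(axis=1) + 1e-12
        scores = (1 - alpha) * prior + alpha * (weights @ scores) / row_sum
        # Weight-update step: down-weight edges whose endpoints currently
        # disagree, so they contribute less in the next propagation.
        agreement = 1.0 - np.abs(scores[:, None] - scores[None, :])
        weights = adj * agreement
    return scores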
no code implementations • 14 Feb 2018 • Binghui Wang, Neil Zhenqiang Gong
In this work, we propose attacks on stealing the hyperparameters that are learned by a learner.
no code implementations • 27 Jan 2018 • Binghui Wang, Chuang Lin
To tackle all these problems, we propose a method, called Matrix Factorization with Column L0-norm constraint (MFC0), that can simultaneously learn the basis for each subspace, generate a direct sparse representation for each data sample, and remove errors in the data in an efficient way.
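A rough alternating-minimization sketch of the column-wise L0 idea is below; the update rules, normalization, and parameters are simplifications for illustration, not the MFC0 algorithm itself:

import numpy as np

def mfc0(X: np.ndarray, rank: int, sparsity: int, iters: int = 30):
    # X: (features, samples). Alternate between a sparse-coding step with a
    # hard column-wise L0 constraint and a least-squares basis update.
    n_feat, n_samples = X.shape
    D = np.random.randn(n_feat, rank)      # basis (one block per subspace)
    Z = np.zeros((rank, n_samples))        # sparse representation
    for _ in range(iters):
        Z = np.linalg.lstsq(D, X, rcond=None)[0]
        for j in range(n_samples):         # keep only `sparsity` entries per column
            keep = np.argsort(np.abs(Z[:, j]))[-sparsity:]
            mask = np.zeros(rank, dtype=bool)
            mask[keep] = True
            Z[~mask, j] = 0.0
        D = np.linalg.lstsq(Z.T, X.T, rcond=None)[0].T   # basis update
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, Z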