Search Results for author: Chenwang Wu

Found 12 papers, 5 papers with code

Understanding Privacy Risks of Embeddings Induced by Large Language Models

no code implementations • 25 Apr 2024 • Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen

Large language models (LLMs) show early signs of artificial general intelligence but struggle with hallucinations.

Securing Recommender System via Cooperative Training

1 code implementation • 23 Jan 2024 • Qingyang Wang, Chenwang Wu, Defu Lian, Enhong Chen

Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process, thoroughly exploring CoAttack's attack potential in the cooperative training of attack and defense.

Recommendation Systems
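
To make the game-theoretic framing above concrete, here is a minimal sketch of alternating attack and defense rounds on a toy matrix-factorization recommender. Everything here (the fit_mf and inject_fake_users helpers, the promoted TARGET_ITEM) is illustrative, not the paper's actual GCoAttack implementation.

```python
# Toy game between an attacker (injects fake profiles) and a defender
# (retrains on the mixed data). All names are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((50, 20)) > 0.8).astype(float)   # toy user-item matrix
TARGET_ITEM = 3                                   # item the attacker promotes

def fit_mf(R, k=8, iters=50, lr=0.05, reg=0.01):
    """Plain full-batch matrix factorization (the defender's model)."""
    U = rng.normal(scale=0.1, size=(R.shape[0], k))
    V = rng.normal(scale=0.1, size=(R.shape[1], k))
    for _ in range(iters):
        E = R - U @ V.T
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U, V

def inject_fake_users(R, n_fake=5):
    """Attacker move: fake profiles that all co-rate the target item."""
    fake = (rng.random((n_fake, R.shape[1])) > 0.9).astype(float)
    fake[:, TARGET_ITEM] = 1.0
    return np.vstack([R, fake])

# Alternate attacker and defender moves for a few "game" rounds.
R_play = R.copy()
for round_ in range(3):
    R_play = inject_fake_users(R_play)    # attack step
    U, V = fit_mf(R_play)                 # defense: retrain on mixed data
    scores = U[: R.shape[0]] @ V.T        # genuine users' predicted scores
    print(round_, scores[:, TARGET_ITEM].mean())
```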

Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity

no code implementations • 18 Dec 2023 • Zhihao Zhu, Chenwang Wu, Rui Fan, Yi Yang, Defu Lian, Enhong Chen

Recent research demonstrates that GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.

Active Learning • Graph Classification • +1
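
A minimal sketch of the query-based stealing loop the abstract describes: an attacker with query permission labels its own inputs through the black box and fits a surrogate on the (query, response) pairs. The target model is simulated below, and random feature vectors stand in for real query graphs.

```python
# Query-based model stealing, reduced to its skeleton. The target
# classifier here is a simulated stand-in for a black-box GNN service.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def random_graph_features(n):
    """Stand-in for real query graphs: e.g. degree-histogram vectors."""
    return rng.random((n, 16))

def target_predict(X):
    """Simulated black-box target (the attacker only sees its outputs)."""
    return (X[:, :8].sum(1) > X[:, 8:].sum(1)).astype(int)

X_query = random_graph_features(500)   # attacker-chosen queries
y_query = target_predict(X_query)      # labels obtained via the query API

surrogate = RandomForestClassifier().fit(X_query, y_query)

X_test = random_graph_features(200)
agreement = (surrogate.predict(X_test) == target_predict(X_test)).mean()
print(f"surrogate/target agreement: {agreement:.2%}")
```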

Model Stealing Attack against Recommender System

no code implementations • 18 Dec 2023 • Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen

Some adversarial attacks have, to some extent, achieved model stealing against recommender systems by collecting abundant training data of the target model (target data) or issuing a large number of queries.

Recommendation Systems
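
For intuition only, a toy sketch of stealing through a recommendation interface: the attacker sees only top-k lists, treats returned items as positives, and fits surrogate embeddings with a BPR-style pairwise update. This is an assumed setup for illustration, not the paper's algorithm.

```python
# Surrogate training from top-k query responses alone.
import numpy as np

rng = np.random.default_rng(1)
N_USERS, N_ITEMS, K = 100, 50, 5
TRUE_U = rng.normal(size=(N_USERS, 8))   # hidden target factors
TRUE_V = rng.normal(size=(N_ITEMS, 8))

def query_topk(u):
    """Black-box target: returns only a top-k item list for user u."""
    return np.argsort(-(TRUE_U[u] @ TRUE_V.T))[:K]

# Fit surrogate embeddings so returned items outscore unreturned ones.
S_U = rng.normal(scale=0.1, size=(N_USERS, 8))
S_V = rng.normal(scale=0.1, size=(N_ITEMS, 8))
for epoch in range(20):
    for u in range(N_USERS):
        pos = query_topk(u)
        neg = rng.choice(np.setdiff1d(np.arange(N_ITEMS), pos),
                         size=K, replace=False)
        diff = S_U[u] @ (S_V[pos] - S_V[neg]).T   # want diff > 0
        grad = 1.0 / (1.0 + np.exp(diff))          # BPR-style weight
        S_V[pos] += 0.05 * grad[:, None] * S_U[u]
        S_V[neg] -= 0.05 * grad[:, None] * S_U[u]
        S_U[u] += 0.05 * (grad[:, None] * (S_V[pos] - S_V[neg])).sum(0)
```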

Toward Robust Recommendation via Real-time Vicinal Defense

no code implementations • 29 Sep 2023 • Yichang Xu, Chenwang Wu, Defu Lian

Recommender systems have been shown to be vulnerable to poisoning attacks, where malicious data is injected into the dataset to cause the recommender system to provide biased recommendations.

Recommendation Systems
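
One plausible reading of "real-time vicinal defense", sketched below: score items for a user from that user's nearest (vicinal) neighbors only, so a handful of injected profiles is outvoted by the neighborhood. Cosine similarity and mean-pooling are illustrative assumptions, not the paper's exact recipe.

```python
# Neighborhood-only scoring as a poisoning-dilution heuristic.
import numpy as np

rng = np.random.default_rng(2)
R = (rng.random((60, 30)) > 0.85).astype(float)   # toy interaction matrix

def vicinal_scores(R, u, n_neighbors=10):
    # Cosine similarity of every user to user u (epsilon avoids /0).
    sims = (R @ R[u]) / (np.linalg.norm(R, axis=1)
                         * np.linalg.norm(R[u]) + 1e-9)
    sims[u] = -np.inf                              # exclude the user itself
    neigh = np.argsort(-sims)[:n_neighbors]
    # Score items from neighborhood evidence only, ignoring distant
    # (possibly fake) profiles.
    return R[neigh].mean(axis=0)

print(np.argsort(-vicinal_scores(R, u=0))[:5])     # top-5 for user 0
```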

Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation

no code implementations • 15 Nov 2022 • Zhihao Zhu, Chenwang Wu, Min Zhou, Hao Liao, Defu Lian, Enhong Chen

Recent studies show that Graph Neural Networks (GNNs) are vulnerable and easily fooled by small perturbations, which has raised considerable concern about adopting GNNs in various safety-critical applications.

Adversarial Attack
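
A hedged sketch of the homophily intuition behind the defense: adversarial edges tend to connect dissimilar nodes, so one can keep only the most feature-similar edges before training. The cosine similarity and keep ratio below are illustrative choices, not the paper's exact augmentation.

```python
# Edge filtering under a homophily assumption.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 16))               # node features
edges = rng.integers(0, 100, size=(400, 2))  # edge list of (u, v) pairs

def homophilous_filter(X, edges, keep_ratio=0.8):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = (Xn[edges[:, 0]] * Xn[edges[:, 1]]).sum(axis=1)
    k = int(keep_ratio * len(edges))
    return edges[np.argsort(-sim)[:k]]       # keep the most homophilous

clean_edges = homophilous_filter(X, edges)
print(len(edges), "->", len(clean_edges), "edges after filtering")
```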

Transposed Variational Auto-encoder with Intrinsic Feature Learning for Traffic Forecasting

2 code implementations • 30 Oct 2022 • Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou

In this technical report, we present our solutions to the Traffic4cast 2022 core challenge and extended challenge.

Feature Selection • Graph Attention

Towards Robust Recommender Systems via Triple Cooperative Defense

no code implementations • 25 Oct 2022 • Qingyang Wang, Defu Lian, Chenwang Wu, Enhong Chen

Notably, TCD adds pseudo-labeled data instead of deleting abnormal data, which avoids discarding normal data along with it, and the cooperative training of the three models also benefits model generalization.

Pseudo Label • Recommendation Systems
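
A tri-training-style sketch of the idea that pseudo labels, not deletion, harden the model: three toy classifiers relabel unlabeled data for each other whenever the other two agree. The models and data are stand-ins; the paper's three cooperating recommenders are not reproduced here.

```python
# Three models teach each other with pseudo labels instead of
# discarding suspect data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_lab, y_lab, X_unl = X[:100], y[:100], X[100:]   # labeled vs unlabeled

models = [LogisticRegression(), DecisionTreeClassifier(max_depth=4),
          KNeighborsClassifier()]
for m in models:
    m.fit(X_lab, y_lab)

for round_ in range(3):
    preds = np.stack([m.predict(X_unl) for m in models])
    for i, m in enumerate(models):
        others = np.delete(np.arange(3), i)
        agree = preds[others[0]] == preds[others[1]]   # the two peers agree
        if agree.any():                                # add pseudo labels
            m.fit(np.vstack([X_lab, X_unl[agree]]),
                  np.concatenate([y_lab, preds[others[0]][agree]]))
```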

Boosting Factorization Machines via Saliency-Guided Mixup

1 code implementation • 17 Jun 2022 • Chenwang Wu, Defu Lian, Yong Ge, Min Zhou, Enhong Chen, DaCheng Tao

Second, considering that MixFM may generate redundant or even detrimental instances, we further put forward a novel Factorization Machine powered by Saliency-guided Mixup (denoted as SMFM).

Recommendation Systems
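
Since SMFM builds on instance mixup, here is the vanilla mixup step it starts from: convex-combine two inputs and their labels with a Beta-sampled weight. The saliency-guided selection that distinguishes SMFM is only noted in a comment, as its details are not given in this snippet.

```python
# Plain input mixup; SMFM would additionally bias the mixing toward
# salient (high-gradient) feature dimensions.
import numpy as np

rng = np.random.default_rng(5)

def mixup(x1, y1, x2, y2, alpha=0.2):
    lam = rng.beta(alpha, alpha)               # mixing coefficient
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, x2 = rng.random(8), rng.random(8)
x_mix, y_mix = mixup(x1, 1.0, x2, 0.0)
print(x_mix, y_mix)
```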

Random Directional Attack for Fooling Deep Neural Networks

1 code implementation • 6 Aug 2019 • Wenjian Luo, Chenwang Wu, Nan Zhou, Li Ni

Unfortunately, as the model is nonlinear in most cases, the addition of perturbations in the gradient direction does not necessarily increase loss.

Speech Recognition
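
The sentence above motivates searching beyond the gradient direction; a toy sketch of a random directional attack follows: sample random unit directions and keep a perturbation only if it actually increases the loss. The nonlinear loss function below is a stand-in for a real network.

```python
# Random-direction search for a loss-increasing perturbation.
import numpy as np

rng = np.random.default_rng(6)

def loss(x):
    return np.sin(x).sum() + 0.1 * (x ** 2).sum()   # nonlinear toy loss

def random_directional_attack(x, eps=0.5, n_dirs=50):
    best_x, best_loss = x, loss(x)
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                 # random unit direction
        cand = x + eps * d
        if loss(cand) > best_loss:             # keep only if loss rises
            best_x, best_loss = cand, loss(cand)
    return best_x

x = rng.normal(size=16)
print(loss(x), loss(random_directional_attack(x)))
```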
