no code implementations • 25 Apr 2024 • Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen
Large language models (LLMs) show early signs of artificial general intelligence but struggle with hallucinations.
1 code implementation • 23 Jan 2024 • Qingyang Wang, Chenwang Wu, Defu Lian, Enhong Chen
Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process and thoroughly explores CoAttack's attack potential through the cooperative training of attack and defense.
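To make the game-theoretic framing concrete, here is a minimal, hypothetical sketch of an alternating attack-defense loop: an attacker moves poison points up the defender's loss gradient while the defender retrains on the combined pool. All function names, the toy data, and the linear defender are illustrative stand-ins, not the paper's GCoAttack implementation.

```python
# Illustrative alternating attack/defense game (not the paper's code).
import torch

def defender_step(model, x, y, lr=0.1):
    """One gradient step of the defense model on the current training pool."""
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        model(x).squeeze(-1), y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
    return loss.item()

def attacker_step(model, x_poison, y_poison, step=0.05):
    """Move poison points up the defender's loss gradient (gradient ascent)."""
    x_adv = x_poison.clone().requires_grad_(True)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        model(x_adv).squeeze(-1), y_poison)
    loss.backward()
    return (x_adv + step * x_adv.grad.sign()).detach()

torch.manual_seed(0)
x_clean = torch.randn(256, 8)
y_clean = (x_clean[:, 0] > 0).float()
x_poison = torch.randn(32, 8)
y_poison = torch.zeros(32)            # attacker-chosen labels
model = torch.nn.Linear(8, 1)

for _ in range(50):                   # both players move each round
    x_poison = attacker_step(model, x_poison, y_poison)
    x_all = torch.cat([x_clean, x_poison])
    y_all = torch.cat([y_clean, y_poison])
    final_loss = defender_step(model, x_all, y_all)
print(f"defender loss after co-training: {final_loss:.3f}")
```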
no code implementations • 18 Dec 2023 • Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen
Some adversarial attacks have achieved model stealing against recommender systems, to some extent, by collecting abundant training data of the target model (target data) or issuing a large number of queries.
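The query-based route amounts to black-box distillation: fit a surrogate on (query, response) pairs from the victim. Below is a minimal sketch under that assumption; the linear "victim recommender", the synthetic user features, and all names are illustrative, not this paper's method.

```python
# Minimal sketch of query-based model extraction (illustrative only).
import torch

n_items, dim = 100, 16
target = torch.nn.Linear(dim, n_items)   # stand-in for the victim recommender
target.requires_grad_(False)

surrogate = torch.nn.Linear(dim, n_items)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)

for step in range(500):
    users = torch.randn(64, dim)         # synthetic query users
    with torch.no_grad():
        teacher_scores = target(users)   # black-box responses
    loss = torch.nn.functional.mse_loss(surrogate(users), teacher_scores)
    opt.zero_grad()
    loss.backward()
    opt.step()

# agreement of top-1 recommendations between victim and stolen model
users = torch.randn(1000, dim)
agree = (surrogate(users).argmax(1) == target(users).argmax(1)).float().mean()
print(f"top-1 agreement: {agree:.2%}")
```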
no code implementations • 18 Dec 2023 • Zhihao Zhu, Chenwang Wu, Rui Fan, Yi Yang, Zhen Wang, Defu Lian, Enhong Chen
Recent research demonstrates that GNNs are vulnerable to the model stealing attack, in which an adversary attempts to duplicate the target model using only query access.
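For the GNN setting, the same extraction idea means querying node predictions from the victim and training a surrogate graph model to reproduce them. The hand-rolled two-layer GCN, random graph, and all weights below are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch of GNN model stealing under query access.
import torch

def gcn(x, a_norm, w1, w2):
    """Two-layer graph convolution: A_norm @ relu(A_norm @ X @ W1) @ W2."""
    h = torch.relu(a_norm @ x @ w1)
    return a_norm @ h @ w2

torch.manual_seed(0)
n, d, h, c = 50, 8, 16, 3
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.T) > 0).float()        # symmetrize
a_hat = adj + torch.eye(n)               # add self loops
deg = a_hat.sum(1)
a_norm = a_hat / deg.sqrt().outer(deg.sqrt())
x = torch.randn(n, d)

# frozen "victim" weights; the attacker only ever sees the victim's outputs
vw1, vw2 = torch.randn(d, h), torch.randn(h, c)
victim_labels = gcn(x, a_norm, vw1, vw2).argmax(1)   # query responses

sw1 = torch.randn(d, h, requires_grad=True)
sw2 = torch.randn(h, c, requires_grad=True)
opt = torch.optim.Adam([sw1, sw2], lr=1e-2)
for _ in range(300):
    loss = torch.nn.functional.cross_entropy(
        gcn(x, a_norm, sw1, sw2), victim_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

fidelity = (gcn(x, a_norm, sw1, sw2).argmax(1) == victim_labels).float().mean()
print(f"fidelity to victim predictions: {fidelity:.2%}")
```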
no code implementations • 29 Sep 2023 • Yichang Xu, Chenwang Wu, Defu Lian
Recommender systems have been shown to be vulnerable to poisoning attacks, where malicious data is injected into the dataset to cause the recommender system to provide biased recommendations.
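A toy illustration of the injection mechanism: fake user profiles that all interact with a target item push it up a naive popularity ranking. This is purely illustrative; the attacks studied in this line of work target learned models, not raw popularity, and every number below is made up.

```python
# Toy data-poisoning demo: promote a target item via injected fake users.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, target_item = 500, 50, 42
ratings = (rng.random((n_users, n_items)) < 0.05).astype(float)  # implicit feedback

def rank_of(item, interactions):
    popularity = interactions.sum(axis=0)
    return int((popularity > popularity[item]).sum()) + 1

print("rank before attack:", rank_of(target_item, ratings))

# attacker injects 5% fake users who all interact with the target item,
# plus random filler interactions to look less suspicious
n_fake = n_users // 20
fake = (rng.random((n_fake, n_items)) < 0.05).astype(float)
fake[:, target_item] = 1.0
poisoned = np.vstack([ratings, fake])
print("rank after attack: ", rank_of(target_item, poisoned))
```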
no code implementations • 31 Jul 2023 • Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, Enhong Chen
The advent of large language models marks a revolutionary breakthrough in artificial intelligence.
1 code implementation • 14 Mar 2023 • Moritz Neun, Christian Eichenberger, Henry Martin, Markus Spanring, Rahul Siripurapu, Daniel Springer, Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou, Martin Lumiste, Andrei Ilie, Xinhua Wu, Cheng Lyu, Qing-Long Lu, Vishal Mahajan, Yichao Lu, Jiezhang Li, Junjun Li, Yue-Jiao Gong, Florian Grötschla, Joël Mathys, Ye Wei, He Haitao, Hui Fang, Kevin Malm, Fei Tang, Michael Kopp, David Kreil, Sepp Hochreiter
We only provide vehicle count data from spatially sparse stationary vehicle detectors in these three cities as model input for this task.
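A rough sketch of what such sparse-detector input looks like in practice: per-detector counts pivoted into a (time, detector) matrix, where most road segments have no column at all and the model must generalize network-wide. The column names and values are hypothetical, not the actual challenge schema.

```python
# Hypothetical shape of sparse stationary-detector input (illustrative schema).
import pandas as pd

raw = pd.DataFrame({
    "detector_id": ["d1", "d1", "d2", "d2"],
    "timestamp": pd.to_datetime(["2022-01-01 08:00", "2022-01-01 08:15",
                                 "2022-01-01 08:00", "2022-01-01 08:15"]),
    "vehicle_count": [120, 135, 40, 52],
})
counts = raw.pivot(index="timestamp", columns="detector_id",
                   values="vehicle_count")
# most road segments have no detector -> models must infer traffic state
# for the whole network from these few columns
print(counts)
```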
no code implementations • 15 Nov 2022 • Zhihao Zhu, Chenwang Wu, Min Zhou, Hao Liao, Defu Lian, Enhong Chen
Recent studies show that Graph Neural Networks (GNNs) are vulnerable and easily fooled by small perturbations, which has raised considerable concerns about adopting GNNs in various safety-critical applications.
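One common form such small perturbations take is a handful of edge flips chosen by the gradient of the loss with respect to a relaxed adjacency matrix, in the spirit of gradient-based graph attacks. The one-layer GCN and random graph below are a minimal illustrative sketch, not this paper's attack or defense.

```python
# Sketch: pick the few edge flips whose gradient most increases the loss.
import torch

torch.manual_seed(0)
n, d, c = 30, 8, 2
r = torch.rand(n, n)
adj = ((r + r.T) > 1.7).float()              # sparse symmetric random graph
adj.fill_diagonal_(0)
x, w = torch.randn(n, d), torch.randn(d, c)

def predict(a):
    a_hat = a + torch.eye(n)                 # self loops
    deg = a_hat.sum(1)
    return (a_hat / deg.sqrt().outer(deg.sqrt())) @ x @ w

clean_pred = predict(adj).argmax(1)          # the model's own clean predictions

adj_var = adj.clone().requires_grad_(True)
loss = torch.nn.functional.cross_entropy(predict(adj_var), clean_pred)
loss.backward()

# expected loss increase of flipping entry (i, j): +grad if the edge is
# absent (adding it), -grad if present (removing it); take the top 5.
# For brevity we flip directed entries; a symmetric attack would flip pairs.
scores = (adj_var.grad * (1 - 2 * adj)).flatten()
flip = scores.topk(5).indices
perturbed = adj.flatten().clone()
perturbed[flip] = 1 - perturbed[flip]
perturbed = perturbed.reshape(n, n)

changed = (predict(perturbed).argmax(1) != clean_pred).float().mean()
print(f"predictions changed by 5 edge flips: {changed:.2%}")
```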
2 code implementations • 30 Oct 2022 • Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou
In this technical report, we present our solutions to the Traffic4cast 2022 core challenge and extended challenge.
no code implementations • 25 Oct 2022 • Qingyang Wang, Defu Lian, Chenwang Wu, Enhong Chen
Notably, TCD adds pseudo label data instead of deleting abnormal data, which avoids the cleaning of normal data, and the cooperative training of the three models is also beneficial to model generalization.
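The pseudo-labeling mechanic resembles classic tri-training: each model receives extra training points labeled by the agreement of its two peers, so suspicious data is outvoted rather than deleted. The toy classifiers, thresholds, and data below are a sketch of that generic idea, not TCD's actual architecture.

```python
# Tri-training-style co-training sketch: peers' agreement supplies pseudo labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_lab, y_lab, X_unlab = X[:100], y[:100], X[100:]

# three models bootstrapped from different slices of the labeled data
models = [LogisticRegression().fit(X_lab[i::3], y_lab[i::3]) for i in range(3)]

for _ in range(5):                           # a few co-training rounds
    for i in range(3):
        peers = [models[j] for j in range(3) if j != i]
        p1, p2 = (p.predict(X_unlab) for p in peers)
        agree = p1 == p2                     # peers agree -> pseudo label
        X_aug = np.vstack([X_lab, X_unlab[agree]])
        y_aug = np.concatenate([y_lab, p1[agree]])
        models[i] = LogisticRegression().fit(X_aug, y_aug)

acc = np.mean(models[0].predict(X) == y)
print(f"model-0 accuracy after co-training: {acc:.2%}")
```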
1 code implementation • 17 Jun 2022 • Chenwang Wu, Defu Lian, Yong Ge, Min Zhou, Enhong Chen, DaCheng Tao
Second, considering that MixFM may generate redundant or even detrimental instances, we further put forward a novel Factorization Machine powered by Saliency-guided Mixup (denoted as SMFM).
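For context, the mixup ingredient interpolates pairs of feature embeddings and labels with a Beta-distributed coefficient; per the abstract, SMFM additionally uses saliency to guide what gets mixed. The sketch below shows only plain embedding-level mixup, with made-up tensor shapes standing in for FM field embeddings.

```python
# Plain embedding-level mixup (the SMFM saliency guidance is not shown).
import torch

def mixup(emb, y, alpha=0.2):
    """Return convex combinations of shuffled pairs of (embedding, label)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(emb.size(0))
    emb_mix = lam * emb + (1 - lam) * emb[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return emb_mix, y_mix

emb = torch.randn(32, 10, 16)   # batch of 32, 10 fields, 16-dim FM embeddings
y = torch.rand(32)
emb_mix, y_mix = mixup(emb, y)
print(emb_mix.shape, y_mix.shape)
```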
1 code implementation • 6 Aug 2019 • Wenjian Luo, Chenwang Wu, Nan Zhou, Li Ni
Unfortunately, as the model is nonlinear in most cases, adding perturbations in the gradient direction does not necessarily increase the loss.
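A tiny numerical illustration of this caveat: an FGSM-style step x + eps * sign(grad) increases the loss only to first order, so on a nonlinear loss surface a larger fixed step can overshoot and the loss may even drop. The one-dimensional loss function below is an arbitrary stand-in, not any model from the paper.

```python
# First-order gradient steps can overshoot on a nonlinear loss surface.
import torch

def loss_fn(x):
    return torch.sin(3 * x) + 0.5 * x ** 2   # stand-in nonlinear loss

x = torch.tensor(0.4, requires_grad=True)
loss = loss_fn(x)
loss.backward()

for eps in (0.05, 0.5, 1.0):
    x_adv = x.detach() + eps * x.grad.sign()  # FGSM-style step
    print(f"eps={eps}: loss {loss.item():+.3f} -> {loss_fn(x_adv).item():+.3f}")
# small eps raises the loss; larger steps overshoot and the loss falls
```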