1 code implementation • 5 Sep 2024 • Rui Wen, Michael Backes, Yang Zhang
By analyzing the linkage between membership inference vulnerability and data importance, we demonstrate that sample characteristics can be integrated into membership metrics through sample-specific criteria, thereby enhancing membership inference performance.
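A minimal sketch of the general idea of a sample-specific criterion, assuming a loss-based membership inference attack whose per-sample threshold is calibrated from shadow models; the function and variable names (`per_sample_threshold_attack`, `shadow_losses`) are illustrative, not taken from the paper.

```python
import numpy as np

def global_threshold_attack(target_losses, threshold):
    """Classic loss-based MIA: a single threshold for every sample."""
    return target_losses < threshold  # low loss -> predicted member

def per_sample_threshold_attack(target_losses, shadow_losses, quantile=0.5):
    """Sample-specific criterion: calibrate a separate threshold per sample
    from losses observed on shadow models that did NOT train on it.

    target_losses: shape (n_samples,)            target-model loss per query
    shadow_losses: shape (n_shadow, n_samples)   shadow-model losses per query
    """
    # An "easy" sample (low loss even for non-members) gets a stricter
    # threshold; a "hard"/important sample gets a looser one.
    thresholds = np.quantile(shadow_losses, quantile, axis=0)
    return target_losses < thresholds

# Toy usage with synthetic losses.
rng = np.random.default_rng(0)
target_losses = rng.exponential(0.5, size=100)
shadow_losses = rng.exponential(1.0, size=(8, 100))
print(per_sample_threshold_attack(target_losses, shadow_losses)[:10])
```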
no code implementations • 2 Sep 2024 • Rui Wen, Zheng Li, Michael Backes, Yang Zhang
Adapting Large Language Models (LLMs) to specific tasks introduces concerns about computational efficiency, prompting an exploration of efficient methods such as In-Context Learning (ICL).
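For context, ICL adapts a model without any parameter updates: labelled demonstrations are simply prepended to the query at inference time. A minimal sketch; the prompt template and task are illustrative assumptions.

```python
def build_icl_prompt(demonstrations, query, instruction="Classify the sentiment."):
    """Adapt an LLM via In-Context Learning: no fine-tuning, just prepend
    labelled demonstrations to the query inside the prompt."""
    lines = [instruction]
    for text, label in demonstrations:
        lines.append(f"Input: {text}\nLabel: {label}")
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

demos = [("The movie was wonderful.", "positive"),
         ("Terrible service, never again.", "negative")]
print(build_icl_prompt(demos, "The plot dragged on forever."))
```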
1 code implementation • 12 Jun 2024 • Edoardo Debenedetti, Javier Rando, Daniel Paleka, Silaghi Fineas Florin, Dragos Albastroiu, Niv Cohen, Yuval Lemberg, Reshmi Ghosh, Rui Wen, Ahmed Salem, Giovanni Cherubin, Santiago Zanella-Beguelin, Robin Schmid, Victor Klemm, Takahiro Miki, Chenhao Li, Stefan Kraft, Mario Fritz, Florian Tramèr, Sahar Abdelnabi, Lea Schönherr
To study this problem, we organized a capture-the-flag competition at IEEE SaTML 2024, where the flag is a secret string in the LLM system prompt.
no code implementations • 1 Apr 2024 • Wen-Jie Shu, Hong-Xia Dou, Rui Wen, Xiao Wu, Liang-Jian Deng
In response, we present the Cross Modulation Transformer (CMT), a pioneering method that modifies the attention mechanism.
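A generic sketch of the kind of cross-modal attention such a modification builds on, where one modality's tokens query (and are thereby modulated by) the other's; this is not the paper's exact CMT block, and the shapes, modality roles, and names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x_a, x_b, W_q, W_k, W_v):
    """Generic cross-modal attention: modality A queries modality B.

    x_a: (n_a, d) tokens of modality A (e.g., panchromatic features)
    x_b: (n_b, d) tokens of modality B (e.g., multispectral features)
    """
    Q, K, V = x_a @ W_q, x_b @ W_k, x_b @ W_v
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (n_a, n_b) cross-modal affinities
    return softmax(scores) @ V                # A's tokens, modulated by B

# Toy usage with random features and weights.
rng = np.random.default_rng(0)
d = 16
x_a, x_b = rng.normal(size=(64, d)), rng.normal(size=(256, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
print(cross_attention(x_a, x_b, W_q, W_k, W_v).shape)  # (64, 16)
```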
1 code implementation • 14 Feb 2024 • Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang
The increasing demand for customized Large Language Models (LLMs) has led to the development of solutions like GPTs.
1 code implementation • 30 Oct 2023 • Minxing Zhang, Ning Yu, Rui Wen, Michael Backes, Yang Zhang
Several membership inference attacks (MIAs) have been proposed to exhibit the privacy vulnerability of generative models by classifying a query image as a member or non-member of the training dataset.
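One common family of such attacks thresholds the distance between the query image and the generator's output manifold. A minimal sketch of that idea, not the specific attack studied in the paper; the distance metric and threshold are illustrative.

```python
import numpy as np

def reconstruction_mia(query, generated_samples, threshold):
    """Minimal distance-based MIA against a generative model.

    query: flattened query image, shape (d,)
    generated_samples: images sampled from the generator, shape (n, d)
    A query lying unusually close to the generator's outputs is
    predicted to be a training member.
    """
    dists = np.linalg.norm(generated_samples - query, axis=1)
    return dists.min() < threshold, dists.min()

# Toy usage with random "images".
rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 32 * 32))
query = samples[0] + 0.01 * rng.normal(size=32 * 32)   # close to the outputs
is_member, score = reconstruction_mia(query, samples, threshold=5.0)
print(is_member, round(score, 3))
```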
no code implementations • 17 Oct 2023 • Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem
Large Language Models (LLMs) are powerful tools for natural language processing, enabling novel applications and user experiences.
1 code implementation • 20 Aug 2023 • Mingxuan Liu, Jie Gan, Rui Wen, Tao Li, Yongli Chen, Hong Chen
To fill this gap, we propose a Spiking-Diffusion model, which is based on a vector-quantized discrete diffusion model.
Ranked #1 on Image Generation on EMNIST-Letters
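To make the "vector quantized" part of the entry above concrete, here is a minimal sketch of the VQ step that maps continuous latents to discrete codebook indices, over which a discrete diffusion model can then operate; the codebook size and dimensions are illustrative assumptions.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map continuous latents to discrete codebook indices (the 'VQ' step
    that lets a discrete diffusion model work on token indices).

    latents:  (n, d) encoder outputs
    codebook: (K, d) learned embedding vectors
    """
    # Pairwise squared distances between latents and codebook entries.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d2.argmin(axis=1)          # discrete tokens
    quantized = codebook[indices]        # nearest-neighbour embeddings
    return indices, quantized

rng = np.random.default_rng(0)
codebook = rng.normal(size=(128, 16))    # K=128 codes of dimension 16
latents = rng.normal(size=(10, 16))
tokens, z_q = vector_quantize(latents, codebook)
print(tokens)
```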
1 code implementation • ACL 2021 • Hengyi Zheng, Rui Wen, Xi Chen, Yifan Yang, Yunyan Zhang, Ziheng Zhang, Ningyu Zhang, Bin Qin, Ming Xu, Yefeng Zheng
Joint extraction of entities and relations from unstructured texts is a crucial task in information extraction.
1 code implementation • 27 Feb 2021 • Zifeng Wang, Yifan Yang, Rui Wen, Xi Chen, Shao-Lun Huang, Yefeng Zheng
Current deep learning based disease diagnosis systems usually suffer from catastrophic forgetting, i.e., directly fine-tuning the diagnosis model on new tasks often leads to an abrupt decay of performance on previous tasks.
no code implementations • 10 Feb 2021 • Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang
To fully utilize the information contained in graph data, a new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.
1 code implementation • 4 Feb 2021 • Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang
As a result, we lack a comprehensive picture of the risks posed by these attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationships among them, and the effectiveness of possible defenses.
no code implementations • 21 Jan 2021 • Yong-rui Chen, Rui Wen, Wei-jie Fu
Various critical exponents related to the order parameter, chiral susceptibilities, and correlation lengths have been calculated for the 3-$d$ $O(4)$ and $Z(2)$ universality classes in the phase diagram.
High Energy Physics - Phenomenology
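For context, these exponents are conventionally defined by the leading scaling behaviour near the critical point, written in terms of the reduced temperature $t$:

$M \sim |t|^{\beta}$ (order parameter), $\chi \sim |t|^{-\gamma}$ (susceptibility), $\xi \sim |t|^{-\nu}$ (correlation length).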
no code implementations • 1 Jan 2021 • Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
In particular, BaN and c-BaN, which are built on a novel generative network, are the first two schemes that algorithmically generate triggers.
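A minimal sketch of the idea of generating triggers with a network rather than fixing them by hand: a tiny generator maps random noise (and, in a conditional variant, a one-hot target label) to a trigger patch. The architecture and names are illustrative, not the paper's exact BaN/c-BaN.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_generator(noise_dim=32, label_dim=10, patch=8, hidden=64):
    """Tiny MLP generator: (noise, target label) -> trigger patch."""
    in_dim = noise_dim + label_dim
    return {
        "W1": rng.normal(scale=0.1, size=(in_dim, hidden)),
        "W2": rng.normal(scale=0.1, size=(hidden, patch * patch)),
        "patch": patch,
    }

def generate_trigger(params, target_label, noise_dim=32, label_dim=10):
    """Conditional variant: the one-hot target label is concatenated with the
    noise, so different target labels yield different trigger patterns."""
    z = rng.normal(size=noise_dim)
    onehot = np.eye(label_dim)[target_label]
    h = np.tanh(np.concatenate([z, onehot]) @ params["W1"])
    trigger = 1.0 / (1.0 + np.exp(-(h @ params["W2"])))   # values in [0, 1]
    return trigger.reshape(params["patch"], params["patch"])

params = init_generator()
print(generate_trigger(params, target_label=3).shape)      # (8, 8)
```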
no code implementations • COLING 2022 • Zifeng Wang, Rui Wen, Xi Chen, Shao-Lun Huang, Ningyu Zhang, Yefeng Zheng
Distant supervision (DS) is an effective way to expand datasets for relation extraction (RE) models, but it often suffers from high label noise.
no code implementations • 6 Sep 2020 • Zifeng Wang, Rui Wen, Xi Chen, Shilei Cao, Shao-Lun Huang, Buyue Qian, Yefeng Zheng
We propose a Healthcare Graph Convolutional Network (HealGCN) to offer a disease self-diagnosis service to online users based on Electronic Healthcare Records (EHRs).
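A minimal sketch of one graph-convolution layer of the kind such a model builds on, propagating node features over a symmetrically normalized graph; the EHR graph construction itself is omitted and all sizes and names are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).

    A: (n, n) adjacency matrix of the EHR-derived graph
    X: (n, d_in) node features, W: (d_in, d_out) weights
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.3).astype(float)
A = np.maximum(A, A.T)                             # make it undirected
X = rng.normal(size=(6, 4))
W = rng.normal(size=(4, 8))
print(gcn_layer(A, X, W).shape)                    # (6, 8)
```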
1 code implementation • NeurIPS 2020 • Zifeng Wang, Xi Chen, Rui Wen, Shao-Lun Huang, Ercan E. Kuruoglu, Yefeng Zheng
Counterfactual learning for dealing with missing-not-at-random (MNAR) data is an intriguing topic in the recommendation literature, since MNAR data are ubiquitous in modern recommender systems.
no code implementations • 7 Mar 2020 • Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
Triggers generated by our techniques can have random patterns and locations, which reduces the efficacy of current backdoor detection mechanisms.
no code implementations • 25 Jan 2020 • Zifeng Wang, Xi Chen, Rui Wen, Shao-Lun Huang
Observed events in recommendation are the consequence of decisions made by a policy; they are therefore selectively labeled, i.e., the data are Missing Not At Random (MNAR), which often introduces a large bias into the estimate of the true outcome risk.
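To make the bias issue concrete, here is the standard inverse-propensity-scoring (IPS) estimator commonly used to debias MNAR feedback, with propensities assumed known; this is a generic sketch, not the specific estimator proposed in the paper.

```python
import numpy as np

def naive_risk(losses, observed):
    """Average loss over observed events only -- biased under MNAR."""
    return losses[observed].mean()

def ips_risk(losses, observed, propensity):
    """Inverse propensity scoring: reweight each observed event by
    1 / P(observed) to estimate the risk over ALL events without bias."""
    return np.sum(observed * losses / propensity) / losses.size

rng = np.random.default_rng(0)
n = 100_000
losses = rng.random(n)                        # per-event loss if fully observed
propensity = 0.1 + 0.8 * losses               # low-loss events observed less often
observed = rng.random(n) < propensity         # MNAR observation mask
print(round(losses.mean(), 3),                # true risk, about 0.5
      round(naive_risk(losses, observed), 3), # biased upward
      round(ips_risk(losses, observed, propensity), 3))  # approximately unbiased
```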