Search Results for author: Qiongkai Xu

Found 33 papers, 15 papers with code

Personal Information Leakage Detection in Conversations

1 code implementation EMNLP 2020 Qiongkai Xu, Lizhen Qu, Zeyu Gao, Gholamreza Haffari

In this work, we propose to protect personal information by warning users of detected suspicious sentences generated by conversational assistants.

Language Modelling

IDT: Dual-Task Adversarial Attacks for Privacy Protection

no code implementations28 Jun 2024 Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, Mark Dras

This paper explores a novel adaptation of adversarial attack techniques to manipulate a text to deceive a classifier w. r. t one task (privacy) whilst keeping the predictions of another classifier trained for another task (utility) unchanged.

Adversarial Attack Attribute

NAP^2: A Benchmark for Naturalness and Privacy-Preserving Text Rewriting by Learning from Human

no code implementations6 Jun 2024 Shuo Huang, William MacLean, Xiaoxi Kang, Anqi Wu, Lizhen Qu, Qiongkai Xu, Zhuang Li, Xingliang Yuan, Gholamreza Haffari

Increasing concerns about privacy leakage issues in academia and industry arise when employing NLP models from third-party providers to process sensitive texts.

Privacy Preserving

Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients

1 code implementation3 Jun 2024 Weijun Li, Qiongkai Xu, Mark Dras

Recent studies have shown that distributed machine learning is vulnerable to gradient inversion attacks, where private training data can be reconstructed by analyzing the gradients of the models shared in training.

SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks

no code implementations19 May 2024 Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn

Modern NLP models are often trained on public datasets drawn from diverse sources, rendering them vulnerable to data poisoning attacks.

Data Poisoning

Transferring Troubles: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning

no code implementations30 Apr 2024 Xuanli He, Jun Wang, Qiongkai Xu, Pasquale Minervini, Pontus Stenetorp, Benjamin I. P. Rubinstein, Trevor Cohn

The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined - such attacks can be achieved by embedding malicious behaviors during training and activated under specific conditions that trigger malicious outputs.

Attacks on Third-Party APIs of Large Language Models

1 code implementation24 Apr 2024 Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas Donald Lane

Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services.

Language Modelling Large Language Model

Backdoor Attack on Multilingual Machine Translation

no code implementations3 Apr 2024 Jun Wang, Qiongkai Xu, Xuanli He, Benjamin I. P. Rubinstein, Trevor Cohn

Our aim is to bring attention to these vulnerabilities within MNMT systems with the hope of encouraging the community to address security concerns in machine translation, especially in the context of low-resource languages.

Backdoor Attack Machine Translation +1

WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection

1 code implementation3 Mar 2024 Anudeex Shetty, Yue Teng, Ke He, Qiongkai Xu

Embedding as a Service (EaaS) has become a widely adopted solution, which offers feature extraction capabilities for addressing various downstream tasks in Natural Language Processing (NLP).

Model extraction

Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge

1 code implementation29 Feb 2024 Ansh Arora, Xuanli He, Maximilian Mozes, Srinibas Swain, Mark Dras, Qiongkai Xu

The democratization of pre-trained language models through open-source initiatives has rapidly advanced innovation and expanded access to cutting-edge technologies.


Generative Models are Self-Watermarked: Declaring Model Authentication through Re-Generation

no code implementations23 Feb 2024 Aditya Desu, Xuanli He, Qiongkai Xu, Wei Lu

As machine- and AI-generated content proliferates, protecting the intellectual property of generative models has become imperative, yet verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data.


Boot and Switch: Alternating Distillation for Zero-Shot Dense Retrieval

1 code implementation27 Nov 2023 Fan Jiang, Qiongkai Xu, Tom Drummond, Trevor Cohn

Experimental results demonstrate that our unsupervised $\texttt{ABEL}$ model outperforms both leading supervised and unsupervised retrievers on the BEIR benchmark.

Passage Retrieval Retrieval

Fingerprint Attack: Client De-Anonymization in Federated Learning

1 code implementation12 Sep 2023 Qiongkai Xu, Trevor Cohn, Olga Ohrimenko

Federated Learning allows collaborative training without data sharing in settings where participants do not trust the central server and one another.

Clustering Federated Learning

G3Detector: General GPT-Generated Text Detector

no code implementations22 May 2023 Haolan Zhan, Xuanli He, Qiongkai Xu, Yuxiang Wu, Pontus Stenetorp

The burgeoning progress in the field of Large Language Models (LLMs) heralds significant benefits due to their unparalleled capacities.

Text Detection

Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation

1 code implementation19 May 2023 Xuanli He, Qiongkai Xu, Jun Wang, Benjamin Rubinstein, Trevor Cohn

Modern NLP models are often trained over large untrusted datasets, raising the potential for a malicious adversary to compromise model behaviour.

Training-free Lexical Backdoor Attacks on Language Models

1 code implementation8 Feb 2023 Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen

In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models.

Backdoor Attack Data Poisoning +1

Rethinking Round-Trip Translation for Machine Translation Evaluation

1 code implementation15 Sep 2022 Terry Yue Zhuo, Qiongkai Xu, Xuanli He, Trevor Cohn

Round-trip translation could be served as a clever and straightforward technique to alleviate the requirement of the parallel evaluation corpus.

Machine Translation Translation

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation

1 code implementation27 Feb 2022 Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari

In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with none or a handful of task-specific labeled examples.

Data Augmentation Disentanglement +3

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark

1 code implementation5 Dec 2021 Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang

Nowadays, due to the breakthrough in natural language generation (NLG), including machine translation, document summarization, image captioning, etc NLG models have been encapsulated in cloud APIs to serve over half a billion people worldwide and process over one hundred billion word generations per day.

Document Summarization Image Captioning +3

Humanly Certifying Superhuman Classifiers

no code implementations16 Sep 2021 Qiongkai Xu, Christian Walder, Chenchen Xu

In this paper, we first raise the challenge of evaluating the performance of both humans and models with respect to an oracle which is unobserved.

Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!

1 code implementation NAACL 2021 Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun

Finally, we investigate two defence strategies to protect the victim model and find that unless the performance of the victim model is sacrificed, both model ex-traction and adversarial transferability can effectively compromise the target models

Model extraction text-classification +2

Privacy-Aware Text Rewriting

no code implementations WS 2019 Qiongkai Xu, Lizhen Qu, Chenchen Xu, Ran Cui

Biased decisions made by automatic systems have led to growing concerns in research communities.

Fairness Translation

ALTER: Auxiliary Text Rewriting Tool for Natural Language Generation

1 code implementation IJCNLP 2019 Qiongkai Xu, Chenchen Xu, Lizhen Qu

In this paper, we describe ALTER, an auxiliary text rewriting tool that facilitates the rewriting process for natural language generation tasks, such as paraphrasing, text simplification, fairness-aware text rewriting, and text style transfer.

Fairness Style Transfer +2

EPUTION at SemEval-2018 Task 2: Emoji Prediction with User Adaption

no code implementations SEMEVAL 2018 Liyuan Zhou, Qiongkai Xu, Hanna Suominen, Tom Gedeon

This paper describes our approach, called EPUTION, for the open trial of the SemEval- 2018 Task 2, Multilingual Emoji Prediction.

General Classification Task 2 +4

Demographic Inference on Twitter using Recursive Neural Networks

no code implementations ACL 2017 Sunghwan Mac Kim, Qiongkai Xu, Lizhen Qu, Stephen Wan, C{\'e}cile Paris

In social media, demographic inference is a critical task in order to gain a better understanding of a cohort and to facilitate interacting with one{'}s audience.

Network Embedding

Collective Vertex Classification Using Recursive Neural Network

no code implementations24 Jan 2017 Qiongkai Xu, Qing Wang, Chenchen Xu, Lizhen Qu

In this paper, we propose a graph-based recursive neural network framework for collective vertex classification.

Classification General Classification

Deep neural networks for learning graph representations

no code implementations Thirtieth AAAI Conference on Artificial Intelligence 2016 Shaosheng Cao, Wei Lu, Qiongkai Xu

Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014).

Clustering Denoising +1

Cannot find the paper you are looking for? You can Submit a new open access paper.