1 code implementation • EMNLP 2020 • Qiongkai Xu, Lizhen Qu, Zeyu Gao, Gholamreza Haffari
In this work, we propose to protect personal information by warning users when suspicious sentences generated by conversational assistants are detected.
no code implementations • 3 Apr 2024 • Jun Wang, Qiongkai Xu, Xuanli He, Benjamin I. P. Rubinstein, Trevor Cohn
Our aim is to bring attention to these vulnerabilities within MNMT systems with the hope of encouraging the community to address security concerns in machine translation, especially in the context of low-resource languages.
1 code implementation • 3 Mar 2024 • Anudeex Shetty, Yue Teng, Ke He, Qiongkai Xu
Embedding as a Service (EaaS) has become a widely adopted solution, offering feature extraction capabilities for various downstream tasks in Natural Language Processing (NLP).
no code implementations • 29 Feb 2024 • Ansh Arora, Xuanli He, Maximilian Mozes, Srinibas Swain, Mark Dras, Qiongkai Xu
The democratization of pre-trained language models through open-source initiatives has rapidly advanced innovation and expanded access to cutting-edge technologies.
no code implementations • 23 Feb 2024 • Aditya Desu, Xuanli He, Qiongkai Xu, Wei Lu
As machine- and AI-generated content proliferates, protecting the intellectual property of generative models has become imperative, yet verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data.
1 code implementation • 27 Nov 2023 • Fan Jiang, Qiongkai Xu, Tom Drummond, Trevor Cohn
Experimental results demonstrate that our unsupervised $\texttt{ABEL}$ model outperforms both leading supervised and unsupervised retrievers on the BEIR benchmark.
1 code implementation • 12 Sep 2023 • Qiongkai Xu, Trevor Cohn, Olga Ohrimenko
Federated Learning allows collaborative training without data sharing in settings where participants do not trust the central server or one another.
no code implementations • 22 May 2023 • Haolan Zhan, Xuanli He, Qiongkai Xu, Yuxiang Wu, Pontus Stenetorp
The burgeoning progress in the field of Large Language Models (LLMs) heralds significant benefits due to their unparalleled capacities.
1 code implementation • 19 May 2023 • Xuanli He, Qiongkai Xu, Jun Wang, Benjamin Rubinstein, Trevor Cohn
Modern NLP models are often trained over large untrusted datasets, raising the potential for a malicious adversary to compromise model behaviour.
1 code implementation • 8 Feb 2023 • Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen
In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models.
1 code implementation • 15 Sep 2022 • Terry Yue Zhuo, Qiongkai Xu, Xuanli He, Trevor Cohn
Round-trip translation can serve as a simple yet effective technique for alleviating the need for a parallel evaluation corpus.
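The round-trip idea can be sketched as follows. This is a minimal illustration, not the paper's method: the two translation functions are hypothetical stand-ins for real MT systems, and the token-level F1 similarity is a toy proxy for a proper quality metric.

```python
# Reference-free MT evaluation via round-trip translation (sketch).
# forward_translate / backward_translate are hypothetical MT systems;
# token_f1 is a toy similarity, not the paper's scoring function.

def token_f1(reference: str, hypothesis: str) -> float:
    """Token-level F1 between two sentences (a toy similarity proxy)."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    common = sum(min(ref.count(t), hyp.count(t)) for t in set(hyp))
    if common == 0:
        return 0.0
    precision = common / len(hyp)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def round_trip_score(source: str, forward_translate, backward_translate) -> float:
    """Estimate translation quality without a parallel corpus:
    translate source -> target -> source, then compare the
    reconstruction against the original source sentence."""
    target = forward_translate(source)
    reconstruction = backward_translate(target)
    return token_f1(source, reconstruction)

# Toy usage with identity "translators": a perfect round trip scores 1.0.
assert round_trip_score("the cat sat", lambda s: s, lambda s: s) == 1.0
```

The intuition is that a good forward system composed with a good backward system should reconstruct the source closely, so reconstruction similarity correlates with forward-translation quality without requiring references.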
1 code implementation • 27 Feb 2022 • Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari
In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with none or a handful of task-specific labeled examples.
1 code implementation • 5 Dec 2021 • Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang
Nowadays, thanks to breakthroughs in natural language generation (NLG), including machine translation, document summarization, and image captioning, NLG models have been encapsulated in cloud APIs that serve over half a billion people worldwide and process over one hundred billion word generations per day.
no code implementations • 16 Sep 2021 • Qiongkai Xu, Christian Walder, Chenchen Xu
In this paper, we first raise the challenge of evaluating the performance of both humans and models with respect to an oracle which is unobserved.
no code implementations • COLING 2022 • Qiongkai Xu, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari
Machine-learning-as-a-service (MLaaS) has attracted millions of users with its powerful large-scale models.
1 code implementation • NAACL 2021 • Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun
Finally, we investigate two defence strategies to protect the victim model and find that, unless the performance of the victim model is sacrificed, both model extraction and adversarial transferability can effectively compromise the target models.
no code implementations • WS 2019 • Qiongkai Xu, Lizhen Qu, Chenchen Xu, Ran Cui
Biased decisions made by automatic systems have led to growing concerns in research communities.
1 code implementation • IJCNLP 2019 • Qiongkai Xu, Chenchen Xu, Lizhen Qu
In this paper, we describe ALTER, an auxiliary text rewriting tool that facilitates the rewriting process for natural language generation tasks, such as paraphrasing, text simplification, fairness-aware text rewriting, and text style transfer.
no code implementations • 13 Aug 2018 • Qiongkai Xu, Juyan Zhang, Lizhen Qu, Lexing Xie, Richard Nock
In this paper, we investigate the diversity aspect of paraphrase generation.
no code implementations • SEMEVAL 2018 • Liyuan Zhou, Qiongkai Xu, Hanna Suominen, Tom Gedeon
This paper describes our approach, called EPUTION, for the open trial of the SemEval-2018 Task 2, Multilingual Emoji Prediction.
no code implementations • ACL 2017 • Sunghwan Mac Kim, Qiongkai Xu, Lizhen Qu, Stephen Wan, Cécile Paris
In social media, demographic inference is a critical task for gaining a better understanding of a cohort and for facilitating interaction with one's audience.
no code implementations • 24 Jan 2017 • Qiongkai Xu, Qing Wang, Chenchen Xu, Lizhen Qu
In this paper, we propose a graph-based recursive neural network framework for collective vertex classification.
no code implementations • AAAI 2016 • Shaosheng Cao, Wei Lu, Qiongkai Xu
Unlike previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014).
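A random-surfing accumulation of this kind can be sketched as below. This is a minimal illustration under the assumption of a row-stochastic transition matrix; the restart probability and step count are illustrative placeholders, not the paper's settings.

```python
# Toy sketch of a random-surfing model over a graph: starting from a
# one-hot distribution p0 for each vertex, repeatedly follow edges with
# probability alpha and restart at the origin with probability
# (1 - alpha), summing the intermediate distributions into a
# probabilistic co-occurrence row for that vertex.

def random_surf(transition, steps=4, alpha=0.98):
    """transition: row-stochastic adjacency matrix (list of lists).
    Returns a matrix whose i-th row accumulates the probability of
    reaching each vertex within `steps` hops starting from vertex i."""
    n = len(transition)
    result = []
    for i in range(n):
        p0 = [1.0 if j == i else 0.0 for j in range(n)]
        p = p0[:]
        acc = [0.0] * n
        for _ in range(steps):
            # One surf step: walk with prob alpha, restart with prob 1 - alpha.
            p = [alpha * sum(p[k] * transition[k][j] for k in range(n))
                 + (1 - alpha) * p0[j] for j in range(n)]
            acc = [a + q for a, q in zip(acc, p)]
        result.append(acc)
    return result

# Tiny directed 3-cycle: 0 -> 1 -> 2 -> 0.
T = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
M = random_surf(T)
```

Because each surf step preserves total probability mass, every row of the accumulated matrix sums to the number of steps, and nearby vertices receive more mass than distant ones.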
2 code implementations • WWW 2015 • Shaosheng Cao, Wei Lu, Qiongkai Xu
In this paper, we present GraRep, a novel model for learning vertex representations of weighted graphs.
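The k-step transition statistics such a model builds on can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a row-stochastic transition matrix, uses an illustrative log-shift of log(N), and omits the matrix factorization (e.g. SVD) that would produce the actual per-step representations.

```python
import math

# Sketch of k-step relational statistics for vertex representations:
# raise the row-stochastic transition matrix to the k-th power, then
# take a positive shifted-log measure of each entry relative to its
# column sum. Each resulting matrix would normally be factorized
# (e.g. via SVD) into per-step representations; that step is omitted.

def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def k_step_log_matrix(transition, k):
    """Positive shifted-log matrix for exactly-k-step transitions."""
    n = len(transition)
    Ak = transition
    for _ in range(k - 1):
        Ak = matmul(Ak, transition)
    col_sums = [sum(Ak[i][j] for i in range(n)) for j in range(n)]
    return [[max(0.0, math.log(n * Ak[i][j] / col_sums[j]))
             if Ak[i][j] > 0 and col_sums[j] > 0 else 0.0
             for j in range(n)]
            for i in range(n)]

# Tiny directed 3-cycle: 0 -> 1 -> 2 -> 0.
T = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
X1 = k_step_log_matrix(T, 1)  # 1-step statistics
X2 = k_step_log_matrix(T, 2)  # 2-step statistics
```

Computing these matrices for several values of k captures relationships at different hop distances, which is what distinguishes this family of models from single-step adjacency factorizations.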
Ranked #1 on Node Classification on 20NEWS