1 code implementation • EMNLP 2020 • Qiongkai Xu, Lizhen Qu, Zeyu Gao, Gholamreza Haffari
In this work, we propose to protect personal information by warning users of detected suspicious sentences generated by conversational assistants.
no code implementations • 22 May 2023 • Haolan Zhan, Xuanli He, Qiongkai Xu, Yuxiang Wu, Pontus Stenetorp
The burgeoning progress in the field of Large Language Models (LLMs) heralds significant benefits due to their unparalleled capacities.
no code implementations • 19 May 2023 • Xuanli He, Qiongkai Xu, Jun Wang, Benjamin Rubinstein, Trevor Cohn
Modern NLP models are often trained over large untrusted datasets, raising the potential for a malicious adversary to compromise model behaviour.
1 code implementation • 8 Feb 2023 • Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen
In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models.
1 code implementation • 15 Sep 2022 • Terry Yue Zhuo, Qiongkai Xu, Xuanli He, Trevor Cohn
Round-trip translation can serve as a clever and straightforward technique to alleviate the requirement of a parallel evaluation corpus.
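As a toy illustration of the idea, the two "translators" below are hypothetical dictionary stubs standing in for real MT systems; only the pipeline shape, source → pivot → back-translation → comparison against the original source, reflects the technique.

```python
# Toy sketch of round-trip translation for reference-free evaluation.
# The dictionaries are illustrative stand-ins for real MT systems.
EN_TO_FR = {"the": "le", "cat": "chat", "feline": "chat", "sleeps": "dort"}
FR_TO_EN = {"le": "the", "chat": "cat", "dort": "sleeps"}

def translate(sentence, table):
    # Word-by-word lookup; unknown words pass through unchanged.
    return " ".join(table.get(w, w) for w in sentence.split())

def round_trip_score(source):
    """Translate source -> pivot -> back, then compare the reconstruction
    with the original source instead of requiring a parallel reference."""
    pivot = translate(source, EN_TO_FR)
    back = translate(pivot, FR_TO_EN)
    src_tokens, back_tokens = source.split(), back.split()
    overlap = sum(a == b for a, b in zip(src_tokens, back_tokens))
    return overlap / max(len(src_tokens), len(back_tokens))

print(round_trip_score("the cat sleeps"))     # 1.0 (perfect reconstruction)
print(round_trip_score("the feline sleeps"))  # lower: "feline" drifts to "cat"
```

A lower score flags information lost in the round trip, without ever consulting a parallel reference translation.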
1 code implementation • 27 Feb 2022 • Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari
In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with none or a handful of task-specific labeled examples.
1 code implementation • 5 Dec 2021 • Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang
Nowadays, due to breakthroughs in natural language generation (NLG), including machine translation, document summarization, and image captioning, NLG models have been encapsulated in cloud APIs to serve over half a billion people worldwide and process over one hundred billion word generations per day.
no code implementations • 16 Sep 2021 • Qiongkai Xu, Christian Walder, Chenchen Xu
In this paper, we first raise the challenge of evaluating the performance of both humans and models with respect to an oracle which is unobserved.
no code implementations • COLING 2022 • Qiongkai Xu, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari
Machine-learning-as-a-service (MLaaS) has attracted millions of users to its splendid large-scale models.
1 code implementation • NAACL 2021 • Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun
Finally, we investigate two defence strategies to protect the victim model and find that, unless the performance of the victim model is sacrificed, both model extraction and adversarial transferability can effectively compromise the target models.
no code implementations • WS 2019 • Qiongkai Xu, Lizhen Qu, Chenchen Xu, Ran Cui
Biased decisions made by automatic systems have led to growing concerns in research communities.
1 code implementation • IJCNLP 2019 • Qiongkai Xu, Chenchen Xu, Lizhen Qu
In this paper, we describe ALTER, an auxiliary text rewriting tool that facilitates the rewriting process for natural language generation tasks, such as paraphrasing, text simplification, fairness-aware text rewriting, and text style transfer.
no code implementations • 13 Aug 2018 • Qiongkai Xu, Juyan Zhang, Lizhen Qu, Lexing Xie, Richard Nock
In this paper, we investigate the diversity aspect of paraphrase generation.
no code implementations • SEMEVAL 2018 • Liyuan Zhou, Qiongkai Xu, Hanna Suominen, Tom Gedeon
This paper describes our approach, called EPUTION, for the open trial of the SemEval-2018 Task 2, Multilingual Emoji Prediction.
no code implementations • ACL 2017 • Sunghwan Mac Kim, Qiongkai Xu, Lizhen Qu, Stephen Wan, Cécile Paris
In social media, demographic inference is a critical task in order to gain a better understanding of a cohort and to facilitate interacting with one's audience.
no code implementations • 24 Jan 2017 • Qiongkai Xu, Qing Wang, Chenchen Xu, Lizhen Qu
In this paper, we propose a graph-based recursive neural network framework for collective vertex classification.
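A minimal sketch of collective vertex classification with recursive neighbour aggregation; the toy graph, random features, and untrained weights are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Illustrative sketch: each vertex's representation is updated recursively
# from its neighbours, then a shared linear classifier predicts labels
# collectively rather than for each vertex in isolation.
rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],           # adjacency of a toy 4-vertex graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 5))            # vertex features (4 vertices, 5 dims)

A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalised adjacency

W_h = rng.normal(size=(5, 5))          # shared recursive weights (untrained)
W_out = rng.normal(size=(5, 3))        # classifier weights: 5 dims -> 3 classes

H = X
for _ in range(2):                     # two recursive aggregation steps
    H = np.tanh(A_hat @ H @ W_h)       # fold in neighbours' representations

logits = H @ W_out
pred = logits.argmax(axis=1)           # one predicted class per vertex
print(pred.shape)                      # (4,)
```

After aggregation, each vertex's prediction depends on its graph neighbourhood, which is the essence of the collective setting.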
no code implementations • Thirtieth AAAI Conference on Artificial Intelligence 2016 • Shaosheng Cao, Wei Lu, Qiongkai Xu
Different from previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014).
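The random surfing idea can be sketched as follows: probability mass is propagated directly over the transition matrix, with a restart to the starting vertex, instead of sampling random-walk sequences. The toy graph, step count, and `alpha` (the probability of continuing the surf) are illustrative choices, not values from the paper.

```python
import numpy as np

A = np.array([[0, 1, 1],               # toy undirected 3-vertex graph
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

def random_surf(P, steps=4, alpha=0.98):
    """Propagate probability mass directly instead of sampling walks."""
    n = P.shape[0]
    p0 = np.eye(n)                     # one surfer started at each vertex
    pk = p0.copy()
    M = np.zeros_like(P)
    for _ in range(steps):
        # Continue surfing with prob. alpha, restart at origin otherwise.
        pk = alpha * (pk @ P) + (1 - alpha) * p0
        M += pk                        # accumulate structural co-occurrence
    return M

M = random_surf(P)
print(M.shape)                         # (3, 3): row i describes vertex i's context
```

Each row of `M` summarises a vertex's structural context, with the restart term keeping nearby vertices weighted more heavily than distant ones.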
2 code implementations • WWW 2015 • Shaosheng Cao, Wei Lu, Qiongkai Xu
In this paper, we present {GraRep}, a novel model for learning vertex representations of weighted graphs.
Ranked #1 on Node Classification on 20NEWS
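A rough sketch of a GraRep-style pipeline on a toy graph: for each step size k, build the k-step transition matrix, form a positive log-probability matrix, factor it with SVD, and concatenate the per-k representations. Matrix sizes, `max_k`, and the embedding dimension are illustrative, and the paper's exact normalisation and loss details are simplified here.

```python
import numpy as np

A = np.array([[0, 1, 0, 1],            # toy 4-vertex graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # 1-step transition probabilities

def grarep(P, max_k=2, dim=2):
    n = P.shape[0]
    reps = []
    Pk = np.eye(n)
    for _ in range(max_k):
        Pk = Pk @ P                    # k-step transition matrix
        # Positive log-probability matrix (simplified normalisation).
        X = np.log(np.maximum(Pk / (Pk.sum(axis=0) / n), 1e-12))
        X = np.maximum(X, 0)           # keep only positive values
        U, S, _ = np.linalg.svd(X)     # low-rank factorisation per k
        reps.append(U[:, :dim] * np.sqrt(S[:dim]))
    return np.hstack(reps)             # concatenate k-specific features

W = grarep(P)
print(W.shape)                         # (4, 4): dim columns per step size k
```

Concatenating the per-k factors is what lets the final representation capture structural information at multiple scales of the weighted graph.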