Search Results for author: Xuanli He

Found 37 papers, 15 papers with code

SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks

no code implementations19 May 2024 Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn

Modern NLP models are often trained on public datasets drawn from diverse sources, rendering them vulnerable to data poisoning attacks.

Data Poisoning

Transferring Troubles: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning

no code implementations30 Apr 2024 Xuanli He, Jun Wang, Qiongkai Xu, Pasquale Minervini, Pontus Stenetorp, Benjamin I. P. Rubinstein, Trevor Cohn

The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined: such attacks embed malicious behaviors during training, which are then activated by specific trigger conditions to produce malicious outputs.

Attacks on Third-Party APIs of Large Language Models

1 code implementation24 Apr 2024 Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas Donald Lane

Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services.

Language Modelling Large Language Model

Backdoor Attack on Multilingual Machine Translation

no code implementations3 Apr 2024 Jun Wang, Qiongkai Xu, Xuanli He, Benjamin I. P. Rubinstein, Trevor Cohn

Our aim is to bring attention to these vulnerabilities within MNMT systems with the hope of encouraging the community to address security concerns in machine translation, especially in the context of low-resource languages.

Backdoor Attack Machine Translation +1

Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge

1 code implementation29 Feb 2024 Ansh Arora, Xuanli He, Maximilian Mozes, Srinibas Swain, Mark Dras, Qiongkai Xu

The democratization of pre-trained language models through open-source initiatives has rapidly advanced innovation and expanded access to cutting-edge technologies.

QNLI SST-2
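The "free lunch" above rests on parameter averaging: merging a possibly-backdoored model with homologous clean models dilutes the poisoned weights. A minimal sketch of uniform weight averaging, using plain dicts of floats as stand-ins for real model state dicts (the names and values here are illustrative, not from the paper):

```python
def merge_models(state_dicts):
    """Uniformly average parameters across homologous models.

    state_dicts: list of {param_name: value} mappings with identical keys,
    standing in for real model weights; averaging dilutes any single
    model's backdoor contribution.
    """
    keys = state_dicts[0].keys()
    assert all(sd.keys() == keys for sd in state_dicts)
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n for k in keys}

# One possibly-backdoored model averaged with two clean fine-tunes.
backdoored = {"w": 3.0, "b": -1.0}
clean_a = {"w": 1.0, "b": 0.0}
clean_b = {"w": 2.0, "b": 1.0}
merged = merge_models([backdoored, clean_a, clean_b])  # {"w": 2.0, "b": 0.0}
```

Real merging would operate on tensors (e.g. per-parameter averages of checkpoints), but the arithmetic is the same per weight.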

Generative Models are Self-Watermarked: Declaring Model Authentication through Re-Generation

no code implementations23 Feb 2024 Aditya Desu, Xuanli He, Qiongkai Xu, Wei Lu

As machine- and AI-generated content proliferates, protecting the intellectual property of generative models has become imperative, yet verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data.

Misinformation

Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities

no code implementations24 Aug 2023 Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis D. Griffin

Spurred by the recent rapid increase in the development and distribution of large language models (LLMs) across industry and academia, much recent work has drawn attention to safety- and security-related threats and vulnerabilities of LLMs, including in the context of potentially criminal activities.

Can Knowledge Graphs Simplify Text?

1 code implementation14 Aug 2023 Anthony Colas, Haodi Ma, Xuanli He, Yang Bai, Daisy Zhe Wang

Knowledge Graph (KG)-to-Text Generation has seen recent improvements in generating fluent and informative sentences which describe a given KG.

Descriptive KG-to-Text Generation +3

IMBERT: Making BERT Immune to Insertion-based Backdoor Attacks

1 code implementation25 May 2023 Xuanli He, Jun Wang, Benjamin Rubinstein, Trevor Cohn

Backdoor attacks are an insidious security threat against machine learning models.

G3Detector: General GPT-Generated Text Detector

no code implementations22 May 2023 Haolan Zhan, Xuanli He, Qiongkai Xu, Yuxiang Wu, Pontus Stenetorp

The burgeoning progress in the field of Large Language Models (LLMs) heralds significant benefits due to their unparalleled capacities.

Text Detection

Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation

1 code implementation19 May 2023 Xuanli He, Qiongkai Xu, Jun Wang, Benjamin Rubinstein, Trevor Cohn

Modern NLP models are often trained over large untrusted datasets, raising the potential for a malicious adversary to compromise model behaviour.

Koala: An Index for Quantifying Overlaps with Pre-training Corpora

no code implementations26 Mar 2023 Thuy-Trang Vu, Xuanli He, Gholamreza Haffari, Ehsan Shareghi

In recent years, increasing attention has been paid to probing the role of pre-training data in the downstream behaviour of Large Language Models (LLMs).

Memorization

Rethinking Round-Trip Translation for Machine Translation Evaluation

1 code implementation15 Sep 2022 Terry Yue Zhuo, Qiongkai Xu, Xuanli He, Trevor Cohn

Round-trip translation can serve as a clever and straightforward technique to alleviate the requirement for a parallel evaluation corpus.

Machine Translation Translation
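The round-trip idea can be sketched without any parallel data: translate a sentence into a pivot language and back, then score the round-trip output against the original. A minimal sketch using toy dictionary "translators" as stand-ins for real MT systems and token-level F1 as a stand-in for a proper MT metric (all names here are illustrative, not from the paper):

```python
def f1_overlap(reference, hypothesis):
    """Token-level F1 between the original sentence and its round trip."""
    ref, hyp = reference.split(), hypothesis.split()
    common = len(set(ref) & set(hyp))
    if common == 0:
        return 0.0
    precision = common / len(hyp)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

# Toy stand-ins for forward and backward MT systems.
EN_DE = {"the": "die", "cat": "katze", "sleeps": "schlaeft"}
DE_EN = {v: k for k, v in EN_DE.items()}

def translate(sentence, table):
    return " ".join(table.get(tok, tok) for tok in sentence.split())

source = "the cat sleeps"
round_trip = translate(translate(source, EN_DE), DE_EN)
score = f1_overlap(source, round_trip)  # 1.0 for this lossless toy round trip
```

In practice the forward and backward systems are independent NMT models and the comparison uses a metric such as BLEU or chrF, but the evaluation loop has this shape.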

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark

1 code implementation5 Dec 2021 Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang

Nowadays, owing to breakthroughs in natural language generation (NLG), including machine translation, document summarization, and image captioning, NLG models have been encapsulated in cloud APIs that serve over half a billion people worldwide and process over one hundred billion word generations per day.

Document Summarization Image Captioning +3
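A lexical watermark of the kind described above can be sketched as deterministic synonym substitution: the API silently replaces certain words with fixed synonyms, and a model distilled from the API's outputs inherits the skewed word usage, which the owner can then detect. A minimal sketch with an invented three-word table (the scheme and words are illustrative, not the paper's exact construction):

```python
# Watermark table: each "original" word is replaced by a fixed synonym.
# The table itself is the owner's secret.
WATERMARK = {"movie": "film", "big": "sizable", "buy": "purchase"}

def watermark_output(text):
    """Apply the lexical watermark to an API response."""
    return " ".join(WATERMARK.get(tok, tok) for tok in text.split())

def watermark_hit_rate(text):
    """Fraction of watermarkable positions that use the secret synonym.

    A suspiciously high rate in a third-party model's outputs suggests
    it was trained on the watermarked API's responses.
    """
    toks = text.split()
    slots = [t for t in toks if t in WATERMARK or t in WATERMARK.values()]
    if not slots:
        return 0.0
    hits = sum(t in WATERMARK.values() for t in slots)
    return hits / len(slots)

marked = watermark_output("i want to buy a big movie poster")
# marked -> "i want to purchase a sizable film poster"
```

Detection then becomes a statistical test on the hit rate rather than this toy exact count.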

Magic Pyramid: Accelerating Inference with Early Exiting and Token Pruning

no code implementations30 Oct 2021 Xuanli He, Iman Keivanloo, Yi Xu, Xiang He, Belinda Zeng, Santosh Rajagopalan, Trishul Chilimbi

To achieve this, we propose a novel idea, Magic Pyramid (MP), to reduce both width-wise and depth-wise computation via token pruning and early exiting for Transformer-based models, particularly BERT.

Text Classification
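The early-exiting half of the idea can be sketched simply: attach a classifier to each layer and stop forwarding through deeper layers once a prediction is confident enough. A minimal sketch with precomputed per-layer logits standing in for internal classifier heads (a toy illustration of confidence-based early exit, not the paper's exact Magic Pyramid mechanism, which also prunes tokens width-wise):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_classify(layer_logits, threshold=0.9):
    """Run layers in order; stop at the first confident prediction.

    layer_logits: per-layer class logits (stand-ins for internal heads).
    Returns (predicted class, number of layers actually executed).
    """
    for depth, logits in enumerate(layer_logits, start=1):
        probs = softmax(logits)
        if max(probs) >= threshold or depth == len(layer_logits):
            return probs.index(max(probs)), depth

# An "easy" input: layer 2 is already confident, so layers 3-4 are skipped.
logits_per_layer = [[0.2, 0.1], [4.0, 0.0], [5.0, 0.0], [6.0, 0.0]]
pred, layers_used = early_exit_classify(logits_per_layer)  # pred=0, layers_used=2
```

Easy inputs thus pay for only a fraction of the network's depth, which is where the inference speed-up comes from.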

Generate, Annotate, and Learn: Generative Models Advance Self-Training and Knowledge Distillation

no code implementations29 Sep 2021 Xuanli He, Islam Nassar, Jamie Ryan Kiros, Gholamreza Haffari, Mohammad Norouzi

To obtain strong task-specific generative models, we either fine-tune a large language model (LLM) on inputs from specific tasks, or prompt an LLM with a few input examples to generate more unlabeled examples.

Few-Shot Learning Knowledge Distillation +2

Generalised Unsupervised Domain Adaptation of Neural Machine Translation with Cross-Lingual Data Selection

1 code implementation EMNLP 2021 Thuy-Trang Vu, Xuanli He, Dinh Phung, Gholamreza Haffari

Once the in-domain data is detected by the classifier, the NMT model is then adapted to the new domain by jointly learning translation and domain discrimination tasks.

Contrastive Learning Machine Translation +3

Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs

no code implementations23 May 2021 Chen Chen, Xuanli He, Lingjuan Lyu, Fangzhao Wu

In this work, we bridge this gap by first presenting an effective model extraction attack, where the adversary can practically steal a BERT-based API (the target/victim model) by issuing only a limited number of queries.

Attribute Inference Attack +4

Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!

1 code implementation NAACL 2021 Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun

Finally, we investigate two defence strategies to protect the victim model, and find that unless the performance of the victim model is sacrificed, both model extraction and adversarial transferability can effectively compromise the target models.

Model Extraction Text Classification +2

Exploring Vulnerabilities of BERT-based APIs

no code implementations1 Jan 2021 Xuanli He, Lingjuan Lyu, Lichao Sun, Xiaojun Chang, Jun Zhao

We then demonstrate how the extracted model can be exploited to develop effective attribute inference attack to expose sensitive information of the training data.

Attribute Inference Attack +4

Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness

2 code implementations Findings of the Association for Computational Linguistics 2020 Lingjuan Lyu, Xuanli He, Yitong Li

It has been demonstrated that hidden representation learned by a deep model can encode private information of the input, hence can be exploited to recover such information with reasonable accuracy.

Fairness

Towards Differentially Private Text Representations

no code implementations25 Jun 2020 Lingjuan Lyu, Yitong Li, Xuanli He, Tong Xiao

Most deep learning frameworks require users to pool their local data or model updates to a trusted server to train or maintain a global model.

Dynamic Programming Encoding for Subword Segmentation in Neural Machine Translation

1 code implementation ACL 2020 Xuanli He, Gholamreza Haffari, Mohammad Norouzi

This paper introduces Dynamic Programming Encoding (DPE), a new segmentation algorithm for tokenizing sentences into subword units.

Machine Translation Segmentation +1
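The dynamic-programming flavour of segmentation can be illustrated with the classic shortest-segmentation recurrence over a subword vocabulary: best[i] is the cheapest way to cover the first i characters, built from best[j] plus one vocabulary piece word[j:i]. This is a minimal sketch of DP-based subword segmentation under a fewest-pieces objective, not the paper's exact DPE objective (which scores subwords with a learned conditional model); the vocabulary below is invented:

```python
def dp_segment(word, vocab):
    """Segment `word` into vocab subwords using the fewest pieces.

    Dynamic program: best[i] = min pieces covering word[:i];
    back[i] records where the last piece starts, for backtracking.
    """
    n = len(word)
    INF = float("inf")
    best = [0] + [INF] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            if word[j:i] in vocab and best[j] + 1 < best[i]:
                best[i] = best[j] + 1
                back[i] = j
    if best[n] == INF:
        return None  # word not coverable by this vocabulary
    pieces, i = [], n
    while i > 0:
        pieces.append(word[back[i]:i])
        i = back[i]
    return pieces[::-1]

vocab = {"un", "like", "ly", "likely", "u", "n"}
dp_segment("unlikely", vocab)  # -> ["un", "likely"]
```

Swapping the piece count for a model-based score (and a sum over segmentations for marginal likelihood) turns this skeleton into the kind of tokenization objective DPE optimizes.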

Sequence to Sequence Mixture Model for Diverse Machine Translation

no code implementations CONLL 2018 Xuanli He, Gholamreza Haffari, Mohammad Norouzi

In this paper, we develop a novel sequence to sequence mixture (S2SMIX) model that improves both translation diversity and quality by adopting a committee of specialized translation models rather than a single translation model.

Clustering Diversity +2

Exploring Textual and Speech information in Dialogue Act Classification with Speaker Domain Adaptation

no code implementations ALTA 2018 Xuanli He, Quan Hung Tran, William Havard, Laurent Besacier, Ingrid Zukerman, Gholamreza Haffari

In spite of the recent success of Dialogue Act (DA) classification, the majority of prior work focuses on text-based classification with oracle transcriptions, i.e., human transcriptions, rather than transcriptions produced by Automatic Speech Recognition (ASR).

Automatic Speech Recognition (ASR) +5

Word Representation Models for Morphologically Rich Languages in Neural Machine Translation

no code implementations WS 2017 Ekaterina Vylomova, Trevor Cohn, Xuanli He, Gholamreza Haffari

Dealing with the complex word forms in morphologically rich languages is an open problem in language processing, and is particularly important in translation.

Hard Attention Machine Translation +1
