Search Results for author: Xiaoxi Mao

Found 19 papers, 9 papers with code

A Pre-training Based Personalized Dialogue Generation Model with Persona-sparse Data

2 code implementations • 12 Nov 2019 • Yinhe Zheng, Rongsheng Zhang, Xiaoxi Mao, Minlie Huang

Further, to incorporate the target persona in the decoding process and to balance its contribution, an attention routing structure is devised in the decoder to merge features extracted from the target persona and dialogue contexts using dynamically predicted weights.

Attribute · Dialogue Generation +1
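
A minimal sketch of the attention-routing idea described in the abstract above: persona-attended and context-attended features are merged with a weight predicted dynamically from the decoder state. Module names, shapes, and the sigmoid gate are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AttentionRouter(nn.Module):
    """Toy attention router: merge features attended from the target
    persona and from the dialogue context using a per-position weight
    predicted from the current decoder states."""

    def __init__(self, d_model: int):
        super().__init__()
        self.persona_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.context_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.gate = nn.Linear(d_model, 1)  # predicts the merge weight

    def forward(self, dec_states, persona_feats, context_feats):
        p, _ = self.persona_attn(dec_states, persona_feats, persona_feats)
        c, _ = self.context_attn(dec_states, context_feats, context_feats)
        alpha = torch.sigmoid(self.gate(dec_states))  # (B, T, 1), dynamic
        return alpha * p + (1 - alpha) * c

router = AttentionRouter(d_model=64)
out = router(torch.randn(2, 5, 64), torch.randn(2, 3, 64), torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 5, 64])
```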

Dialogue Distillation: Open-Domain Dialogue Augmentation Using Unpaired Data

1 code implementation • EMNLP 2020 • Rongsheng Zhang, Yinhe Zheng, Jianzhi Shao, Xiaoxi Mao, Yadong Xi, Minlie Huang

Further, a model-level distillation process is employed to distill a teacher model, trained on high-quality paired data, onto the augmented dialogue pairs, thereby preventing dialogue models from being affected by the noise in the augmented data.

Data Augmentation
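
The model-level distillation step can be pictured as a standard soft-label objective: the student matches the teacher's softened token distributions on the augmented pairs. A generic sketch; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic knowledge-distillation loss on next-token distributions."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # KL(teacher || student), scaled by t^2 as in standard distillation
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

s = torch.randn(4, 10, 100)   # (batch, seq, vocab) student logits
te = torch.randn(4, 10, 100)  # teacher logits on the same augmented pairs
print(distillation_loss(s, te).item())
```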

Stylized Dialogue Response Generation Using Stylized Unpaired Texts

1 code implementation • 27 Sep 2020 • Yinhe Zheng, Zikai Chen, Rongsheng Zhang, Shilei Huang, Xiaoxi Mao, Minlie Huang

However, this task is far from well-explored due to the difficulties of rendering a particular style in coherent responses, especially when the target style is embedded only in unpaired texts that cannot be directly used to train the dialogue model.

Dialogue Generation · Response Generation

Positional Artefacts Propagate Through Masked Language Model Embeddings

no code implementations • ACL 2021 • Ziyang Luo, Artur Kulmizev, Xiaoxi Mao

In this work, we demonstrate that the contextualized word vectors derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers.

Language Modelling · Sentence +3
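
One way to surface such a pattern is to look for embedding dimensions whose magnitude is consistently extreme across positions. The sketch below uses random vectors with one injected outlier dimension as a stand-in for real contextualized embeddings; the clipping at the end is a commonly discussed mitigation in this line of work, not a claim about the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(128, 768))      # (positions, hidden_dim) stand-in
hidden[:, 557] += 8.0                     # inject one artificial outlier dim

mean_abs = np.abs(hidden).mean(axis=0)    # per-dimension mean magnitude
z = (mean_abs - mean_abs.mean()) / mean_abs.std()
outliers = np.where(z > 3)[0]
print("candidate artefact dimensions:", outliers)

# Mitigation sketch: clip extreme values before using the vectors
# for similarity-style downstream tasks.
clipped = np.clip(hidden, np.quantile(hidden, 0.01), np.quantile(hidden, 0.99))
```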

Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model

1 code implementation • 26 Apr 2021 • Gongzheng li, Yadong Xi, Jingzhen Ding, Duan Wang, Bai Liu, Changjie Fan, Xiaoxi Mao, Zeng Zhao

To fill such a gap, we introduce a scalable inference solution: Easy and Efficient Transformer (EET), including a series of transformer inference optimizations at the algorithm and implementation levels.

Inference Optimization · Text Generation

OpenMEVA: A Benchmark for Evaluating Open-ended Story Generation Metrics

1 code implementation • ACL 2021 • Jian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wenbiao Ding, Xiaoxi Mao, Changjie Fan, Minlie Huang

Automatic metrics are essential for developing natural language generation (NLG) models, particularly for open-ended language generation tasks such as story generation.

Story Generation
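
Benchmarking an automatic metric usually reduces to correlating its scores with human judgments over the same generated stories. A toy example with fabricated numbers, purely to show the computation:

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical human quality ratings and metric scores for six stories.
human  = [4.0, 2.5, 3.0, 5.0, 1.5, 3.5]
metric = [0.62, 0.35, 0.48, 0.80, 0.30, 0.55]

print("Pearson: ", pearsonr(human, metric)[0])
print("Spearman:", spearmanr(human, metric)[0])
```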

Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence

1 code implementation • ACL 2021 • Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, Minlie Huang

Generating long and coherent text is an important but challenging task, particularly for open-ended language generation tasks such as story generation.

Semantic Similarity · Semantic Textual Similarity +2

KuiLeiXi: a Chinese Open-Ended Text Adventure Game

no code implementations • ACL 2021 • Yadong Xi, Xiaoxi Mao, Le Li, Lei Lin, Yanjiang Chen, Shuhan Yang, Xuhan Chen, Kailun Tao, Zhi Li, Gongzheng li, Lin Jiang, Siyan Liu, Zeng Zhao, Minlie Huang, Changjie Fan, Zhipeng Hu

Equipped with GPT-2 and the latest GPT-3, AI Dungeon has been seen as a famous example of the powerful text generation capabilities of large-scale pre-trained language models, and as a possible direction for future games.

Story Generation

LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation

2 code implementations • 30 Aug 2021 • Jian Guan, Zhuoer Feng, Yamei Chen, Ruilin He, Xiaoxi Mao, Changjie Fan, Minlie Huang

Therefore, we propose a story-centric benchmark named LOT for evaluating Chinese long text modeling, which aggregates two understanding tasks and two generation tasks.

Text Infilling

Analyzing the Implicit Position Encoding Ability of Transformer Decoder

no code implementations • 29 Sep 2021 • Ziyang Luo, Yadong Xi, Jing Ma, Xiaoxi Mao, Changjie Fan

A common limitation of the Transformer encoder's self-attention mechanism is that it cannot automatically capture word-order information, so explicit position encodings must be fed into the target model.

Language Modelling · Position
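
The decoder-side contrast can be demonstrated directly: without position encodings, bidirectional self-attention is permutation-equivariant, while a causal mask makes the output at each position depend on what precedes it. A minimal, self-contained demo using simplified attention (shared query/key/value, no learned projections):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 4, 8)                      # (batch, seq, dim)
perm = torch.tensor([2, 0, 3, 1])
xp = x[:, perm, :]                            # permuted token order

def attn(q, mask=None):
    scores = q @ q.transpose(-1, -2) / 8 ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ q

causal = torch.triu(torch.ones(4, 4, dtype=torch.bool), diagonal=1)

# Bidirectional (encoder-style): shuffling inputs just shuffles outputs.
print(torch.allclose(attn(x)[:, perm, :], attn(xp), atol=1e-6))        # True

# Causal (decoder-style): outputs change with token order, so position
# information leaks in even without explicit encodings.
print(torch.allclose(attn(x, causal)[:, perm, :], attn(xp, causal),
                     atol=1e-6))                                       # False
```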

Unsupervised Domain Adaptation with Adapter

no code implementations • 1 Nov 2021 • Rongsheng Zhang, Yinhe Zheng, Xiaoxi Mao, Minlie Huang

However, fine-tuning all the parameters of the PrLM on a small domain-specific corpus distorts the learned generic knowledge, and it is also expensive to deploy a whole fine-tuned PrLM for each domain.

Unsupervised Domain Adaptation
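
A bottleneck adapter in the style of Houlsby et al. is the usual alternative: the pretrained backbone stays frozen and only a small residual module is trained per domain. A generic sketch, not the paper's code:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    residual connection. Inserted into a frozen pretrained LM so only
    a few parameters are trained for each new domain."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, h):
        return h + self.up(self.act(self.down(h)))  # residual

adapter = Adapter(768)
h = torch.randn(2, 16, 768)   # stand-in for frozen-backbone hidden states
print(adapter(h).shape)       # torch.Size([2, 16, 768])
```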

Taming Repetition in Dialogue Generation

no code implementations • 16 Dec 2021 • Yadong Xi, Jiashu Pu, Xiaoxi Mao

The wave of pre-trained language models has continuously improved the quality of machine-generated conversations; however, some generated responses still suffer from excessive repetition, sometimes repeating words from the input utterance, sometimes repeating words within the self-generated response, or both.

Dialogue Generation
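
For decoding-time context, a standard repetition penalty (Keskar et al., CTRL) illustrates the kind of fix this problem invites; it is shown here as background, not as this paper's proposed method:

```python
import torch

def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Discourage tokens that already appeared: shrink positive logits,
    push negative logits further down."""
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    return logits

logits = torch.randn(50257)          # vocab-sized next-token logits
history = [464, 2068, 7586, 2068]    # token ids generated so far
logits = apply_repetition_penalty(logits, history)
```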

Dialog Intent Induction via Density-based Deep Clustering Ensemble

no code implementations • 18 Jan 2022 • Jiashu Pu, Guandan Chen, Yongzhu Chang, Xiaoxi Mao

Existing task-oriented chatbots rely heavily on spoken language understanding (SLU) systems to determine the intent of a user's utterance and other key information needed to fulfill specific tasks.

Clustering · Clustering Ensemble +2
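
As background for the title, density-based clustering over utterance embeddings is the natural building block. The sketch below runs DBSCAN on random stand-in embeddings; the deep encoder and the ensemble step from the paper are omitted entirely.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
embeddings = np.vstack([
    rng.normal(0.0, 0.1, size=(50, 32)),   # utterances of one latent intent
    rng.normal(1.0, 0.1, size=(50, 32)),   # utterances of another intent
])

labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(embeddings)
print("induced intents:", set(labels))     # -1 marks noise points
```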

Youling: an AI-Assisted Lyrics Creation System

no code implementations • EMNLP 2020 • Rongsheng Zhang, Xiaoxi Mao, Le Li, Lin Jiang, Lin Chen, Zhiwei Hu, Yadong Xi, Changjie Fan, Minlie Huang

In the lyrics generation process, Youling supports the traditional one-pass full-text generation mode as well as an interactive generation mode, which allows users to select satisfactory sentences from generated candidates conditioned on the preceding context.

Text Generation
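
The interactive mode can be approximated with any autoregressive LM: sample several candidate continuations, let the user pick one, and append it to the context. A sketch using a generic GPT-2 from Hugging Face transformers in place of Youling's proprietary lyrics model (downloads weights on first run):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

context = "Under the pale moonlight,"
for _ in range(2):                       # two interactive rounds
    ids = tok(context, return_tensors="pt").input_ids
    outs = model.generate(
        ids, do_sample=True, top_p=0.9, max_new_tokens=12,
        num_return_sequences=3, pad_token_id=tok.eos_token_id,
    )
    candidates = [tok.decode(o[ids.shape[1]:], skip_special_tokens=True)
                  for o in outs]
    # In the real system a user picks a candidate; here we take the first.
    context += candidates[0]
print(context)
```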

Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model

no code implementations • NAACL (ACL) 2022 • Gongzheng li, Yadong Xi, Jingzhen Ding, Duan Wang, Ziyang Luo, Rongsheng Zhang, Bai Liu, Changjie Fan, Xiaoxi Mao, Zeng Zhao

To fill such a gap, we introduce a scalable inference solution: Easy and Efficient Transformer (EET), including a series of transformer inference optimizations at the algorithm and implementation levels.

Inference Optimization

QiuNiu: A Chinese Lyrics Generation System with Passage-Level Input

no code implementations • ACL 2022 • Le Zhang, Rongsheng Zhang, Xiaoxi Mao, Yongzhu Chang

In this paper, we demonstrate QiuNiu, a Chinese lyrics generation system that is conditioned on passage-level text rather than a few attributes or keywords.

Text Generation · Unsupervised Machine Translation
