Search Results for author: Yadong Xi

Found 12 papers, 4 papers with code

Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model

no code implementations NAACL (ACL) 2022 Gongzheng li, Yadong Xi, Jingzhen Ding, Duan Wang, Ziyang Luo, Rongsheng Zhang, Bai Liu, Changjie Fan, Xiaoxi Mao, Zeng Zhao

To fill this gap, we introduce a scalable inference solution, Easy and Efficient Transformer (EET), which includes a series of transformer inference optimizations at both the algorithm and implementation levels.

Inference Optimization

Unraveling the Mystery of Artifacts in Machine Generated Text

1 code implementation LREC 2022 Jiashu Pu, Ziyi Huang, Yadong Xi, Guandan Chen, WeiJie Chen, Rongsheng Zhang

As neural Text Generation Models (TGMs) have become more and more capable of generating text indistinguishable from human-written text, the misuse of text generation technologies can have serious ramifications.

Text Generation

Probing Simile Knowledge from Pre-trained Language Models

1 code implementation ACL 2022 WeiJie Chen, Yongzhu Chang, Rongsheng Zhang, Jiashu Pu, Guandan Chen, Le Zhang, Yadong Xi, Yijiang Chen, Chang Su

In this paper, we probe simile knowledge from PLMs to solve the simile interpretation (SI) and simile generation (SG) tasks in a unified framework of simile triple completion for the first time.

Language Modelling

A Frustratingly Simple Approach for End-to-End Image Captioning

no code implementations 30 Jan 2022 Ziyang Luo, Yadong Xi, Rongsheng Zhang, Jing Ma

Before training the captioning models, an extra object detector is first used to recognize the objects in the image.

Image Captioning, Text Generation

Youling: an AI-Assisted Lyrics Creation System

no code implementations EMNLP 2020 Rongsheng Zhang, Xiaoxi Mao, Le Li, Lin Jiang, Lin Chen, Zhiwei Hu, Yadong Xi, Changjie Fan, Minlie Huang

In the lyrics generation process, Youling supports a traditional one-pass full-text generation mode as well as an interactive generation mode, which allows users to select satisfactory sentences from generated candidates conditioned on the preceding context.

Text Generation

Taming Repetition in Dialogue Generation

no code implementations 16 Dec 2021 Yadong Xi, Jiashu Pu, Xiaoxi Mao

The wave of pre-trained language models has continuously improved the quality of machine-generated conversations; however, some generated responses still suffer from excessive repetition, sometimes repeating words from the input utterance, sometimes repeating words within the self-generated response, or both.

Dialogue Generation
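The repetition problem described above is often mitigated at decoding time with a repetition penalty that downweights tokens already generated. The sketch below is a generic illustration of that heuristic (following the CTRL-style divide/multiply rule), not the method proposed in the paper; the token-to-score dictionary and `penalty` value are illustrative.

```python
from collections import Counter

def apply_repetition_penalty(logits: dict, generated: list, penalty: float = 1.2) -> dict:
    """Downweight tokens that already appear in the generated history.

    `logits` maps token -> raw score. Positive scores are divided by the
    penalty and negative scores multiplied by it, so penalized tokens
    always become less likely after softmax.
    """
    seen = Counter(generated)
    out = {}
    for tok, score in logits.items():
        if tok in seen:
            out[tok] = score / penalty if score > 0 else score * penalty
        else:
            out[tok] = score
    return out

logits = {"hello": 2.0, "world": 1.0, "again": -0.5}
adjusted = apply_repetition_penalty(logits, generated=["hello", "again"])
print(adjusted["hello"])  # 2.0 / 1.2 ≈ 1.667
```

This only addresses surface repetition within one response; repetition of the input utterance, as the excerpt notes, is a distinct failure mode.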

Analyzing the Implicit Position Encoding Ability of Transformer Decoder

no code implementations 29 Sep 2021 Ziyang Luo, Yadong Xi, Jing Ma, Xiaoxi Mao, Changjie Fan

A common limitation of the Transformer encoder's self-attention mechanism is that it cannot automatically capture word-order information, so explicit position encodings must be fed into the target model.

Language Modelling
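The explicit position encodings mentioned in the excerpt can be sketched with the standard sinusoidal scheme from the original Transformer, which is added to token embeddings so self-attention can distinguish word order. This is a generic illustration, not the analysis performed in the paper.

```python
import numpy as np

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal position encoding ('Attention Is All You Need').

    Returns an array of shape (seq_len, d_model): even dimensions use sine,
    odd dimensions use cosine, with geometrically spaced wavelengths.
    """
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(d_model)[None, :]             # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe

pe = sinusoidal_position_encoding(seq_len=8, d_model=16)
print(pe.shape)  # (8, 16)
```

The paper's point is that a decoder with a causal attention mask can learn position information implicitly, making such an explicit table less essential than in the encoder case.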

KuiLeiXi: a Chinese Open-Ended Text Adventure Game

no code implementations ACL 2021 Yadong Xi, Xiaoxi Mao, Le Li, Lei Lin, Yanjiang Chen, Shuhan Yang, Xuhan Chen, Kailun Tao, Zhi Li, Gongzheng li, Lin Jiang, Siyan Liu, Zeng Zhao, Minlie Huang, Changjie Fan, Zhipeng Hu

Powered by GPT-2 and, more recently, GPT-3, AI Dungeon has been seen as a famous example of the powerful text generation capabilities of large-scale pre-trained language models, and of their possibilities for future games.

Story Generation

Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model

1 code implementation 26 Apr 2021 Gongzheng li, Yadong Xi, Jingzhen Ding, Duan Wang, Bai Liu, Changjie Fan, Xiaoxi Mao, Zeng Zhao

To fill this gap, we introduce a scalable inference solution, Easy and Efficient Transformer (EET), which includes a series of transformer inference optimizations at both the algorithm and implementation levels.

Inference Optimization, Text Generation

Dialogue Distillation: Open-Domain Dialogue Augmentation Using Unpaired Data

1 code implementation EMNLP 2020 Rongsheng Zhang, Yinhe Zheng, Jianzhi Shao, Xiaoxi Mao, Yadong Xi, Minlie Huang

Further, a model-level distillation process is employed to distill a teacher model, trained on high-quality paired data, into the augmented dialogue pairs, thereby preventing dialogue models from being affected by noise in the augmented data.

Data Augmentation
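Model-level distillation of the kind described above is commonly implemented as a KL divergence between the teacher's and student's temperature-softened output distributions. The sketch below shows that generic objective on raw logit vectors; the `temperature` value and toy logits are illustrative, and this is not presented as the paper's exact formulation.

```python
import numpy as np

def softmax(x: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Numerically stable softmax over temperature-scaled logits."""
    z = np.asarray(x, dtype=float) / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature: float = 2.0) -> float:
    """KL(teacher || student) on softened distributions — the standard
    knowledge-distillation objective (generic sketch)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

loss_same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_diff = distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
print(loss_same, loss_diff)  # identical distributions give ~0 loss
```

Training the student on teacher distributions rather than raw augmented labels is what shields it from label noise in the augmented pairs.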
