no code implementations • 16 Dec 2022 • Yuxi Feng, Xiaoyuan Yi, Xiting Wang, Laks V. S. Lakshmanan, Xing Xie
Augmented with only self-generated pseudo text, generation models over-exploit the previously learned space and thus suffer from a constrained generalization boundary.
1 code implementation • 14 Nov 2022 • Wenhao Li, Xiaoyuan Yi, Jinyi Hu, Maosong Sun, Xing Xie
In this work, we dig into the intrinsic mechanism behind this problem and find that sparser attention values in the Transformer can improve diversity.
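As a rough illustration of the idea (not necessarily the paper's exact mechanism), the sketch below replaces softmax with sparsemax (Martins & Astudillo, 2016) in scaled dot-product attention so that many attention weights become exactly zero; the PyTorch implementation and function names are assumptions for this example.

```python
import torch

def sparsemax(scores: torch.Tensor) -> torch.Tensor:
    """Sparsemax over the last dimension: a sparse alternative to softmax
    (Martins & Astudillo, 2016). Many output probabilities are exactly zero."""
    z_sorted, _ = torch.sort(scores, dim=-1, descending=True)
    cumsum = z_sorted.cumsum(dim=-1)
    k = torch.arange(1, scores.size(-1) + 1, device=scores.device, dtype=scores.dtype)
    support = 1 + k * z_sorted > cumsum              # entries that stay in the support
    k_z = support.sum(dim=-1, keepdim=True)          # support size
    tau = (cumsum.gather(-1, k_z - 1) - 1) / k_z.to(scores.dtype)
    return torch.clamp(scores - tau, min=0.0)

def sparse_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention with sparsemax in place of softmax."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    weights = sparsemax(scores)                      # sparser attention values
    return weights @ v
```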
no code implementations • 22 Oct 2022 • Jinyi Hu, Xiaoyuan Yi, Wenhao Li, Maosong Sun, Xing Xie
We demonstrate that TRACE can strengthen the entanglement between each segment and the preceding latent variables and derive a non-zero lower bound on the KL term, providing a theoretical guarantee of generation diversity.
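For context, the KL term in question is the standard VAE regularizer KL(q(z|x) || p(z)); the snippet below is only a minimal sketch of that term for a diagonal-Gaussian posterior against a standard-normal prior, and does not reproduce the TRACE-specific lower bound. When this term collapses to zero, the latent variables carry no information and generation diversity suffers.

```python
import torch

def gaussian_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    A value of zero indicates posterior collapse: the latent carries no information."""
    return 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)
```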
1 code implementation • 13 Oct 2022 • Seungeon Lee, Xiting Wang, Sungwon Han, Xiaoyuan Yi, Xing Xie, Meeyoung Cha
We present SELOR, a framework for integrating self-explaining capabilities into a given deep model to achieve both high prediction performance and human precision.
no code implementations • 10 Oct 2022 • Zonghan Yang, Xiaoyuan Yi, Peng Li, Yang Liu, Xing Xie
Warning: this paper contains model outputs exhibiting offensiveness and biases.
1 code implementation • NAACL 2022 • Jinyi Hu, Xiaoyuan Yi, Wenhao Li, Maosong Sun, Xing Xie
The past several years have witnessed the superiority of the Variational Auto-Encoder (VAE) in various text generation tasks.
1 code implementation • 3 Jun 2021 • Wenhao Li, Fanchao Qi, Maosong Sun, Xiaoyuan Yi, Jiarui Zhang
We hope this dataset can further advance research on incorporating deep semantics into systems for understanding and generating Chinese classical poetry.
1 code implementation • NAACL 2021 • Zhenghao Liu, Xiaoyuan Yi, Maosong Sun, Liner Yang, Tat-Seng Chua
Grammatical Error Correction (GEC) aims to correct writing errors and help language learners improve their writing skills.
Ranked #1 on Grammatical Error Detection on FCE
no code implementations • 13 Mar 2020 • Xiaoyuan Yi, Ruoyu Li, Cheng Yang, Wenhao Li, Maosong Sun
Although recent neural models make notable progress on some criteria of poetry quality, generated poems still suffer from poor diversity.
no code implementations • ACL 2019 • Guo Zhipeng, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, Jiannan Liang, Huimin Chen, Yuhui Zhang, Ruoyu Li
By exposing options for poetry genres, styles, and revision modes, Jiuge acts as a professional assistant and enables constant, active participation of users in poetic creation.
no code implementations • EMNLP 2018 • Cheng Yang, Maosong Sun, Xiaoyuan Yi, Wenhao Li
The ability to write diverse poems in different styles under the same poetic imagery is an important characteristic of human poetry writing.
no code implementations • EMNLP 2018 • Xiaoyuan Yi, Maosong Sun, Ruoyu Li, Wenhao Li
Human experts evaluate poetry in terms of specific criteria rather than word-level likelihood.
1 code implementation • 12 Sep 2018 • Xiaoyuan Yi, Maosong Sun, Ruoyu Li, Zonghan Yang
Different from previous methods, our model explicitly maintains topics and a limited but informative history in a neural memory.
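As a rough illustration only (not the paper's architecture), a memory of topic and history vectors can be read by attention at each decoding step; the function and tensor names below are hypothetical.

```python
import torch

def read_memory(query: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
    """Attention read over memory slots holding topic and recent-history vectors.
    query: (batch, d); memory: (batch, slots, d). Returns a (batch, d) summary."""
    scores = torch.einsum("bd,bsd->bs", query, memory) / (query.size(-1) ** 0.5)
    weights = torch.softmax(scores, dim=-1)
    return torch.einsum("bs,bsd->bd", weights, memory)
```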
no code implementations • CONLL 2018 • Xiaoyuan Yi, Ruoyu Li, Maosong Sun
As a precious part of the human cultural heritage, Chinese poetry has influenced people for generations.
no code implementations • 6 Apr 2016 • Xiaoyuan Yi, Ruoyu Li, Maosong Sun
We treat the generation of Chinese classical poem lines as a sequence-to-sequence learning problem and build a novel system based on the RNN Encoder-Decoder structure to generate quatrains (Jueju in Chinese), with a topic word as input.
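A minimal sketch of this general setup is below (a GRU-based encoder-decoder in PyTorch; the hyperparameters, module names, and choice of GRU cells are assumptions for illustration, not the paper's exact configuration).

```python
import torch
import torch.nn as nn

class Seq2SeqPoemLine(nn.Module):
    """Encode a topic word (or a preceding line), then decode the next poem line."""

    def __init__(self, vocab_size: int, emb_dim: int = 128, hid_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids: torch.Tensor, tgt_ids: torch.Tensor) -> torch.Tensor:
        # src_ids: topic-word / previous-line tokens, shape (batch, src_len)
        # tgt_ids: target poem-line tokens shifted right, shape (batch, tgt_len)
        _, h = self.encoder(self.embed(src_ids))      # h: (1, batch, hid_dim)
        dec_out, _ = self.decoder(self.embed(tgt_ids), h)
        return self.out(dec_out)                      # (batch, tgt_len, vocab_size) logits
```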