no code implementations • EMNLP 2021 • Fuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, Xu Sun, Songfang Huang, Fei Huang
Pre-trained self-supervised models such as BERT have achieved striking success in learning sequence representations, especially for natural language processing.
1 code implementation • ICLR 2022 • Pengcheng Yang, Xiaoming Zhang, Wenpeng Zhang, Ming Yang, Hong Wei
The recent trend of using large-scale deep neural networks (DNNs) to boost performance has propelled the development of parallel pipelining techniques for efficient DNN training, yielding several prominent pipeline systems such as GPipe, PipeDream, and PipeDream-2BW.
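To make the micro-batching idea behind such pipelines concrete, here is a minimal sketch of a GPipe-style forward schedule; the stage and micro-batch bookkeeping is illustrative, not taken from the paper:

# Minimal sketch of a GPipe-style pipeline schedule (illustrative only).
# A mini-batch is split into micro-batches so that different pipeline
# stages can process different micro-batches at the same clock tick.

def pipeline_schedule(num_stages: int, num_microbatches: int):
    """Yield (tick, stage, microbatch) triples for the forward pass."""
    for tick in range(num_stages + num_microbatches - 1):
        for stage in range(num_stages):
            mb = tick - stage
            if 0 <= mb < num_microbatches:
                yield tick, stage, mb

for tick, stage, mb in pipeline_schedule(num_stages=3, num_microbatches=4):
    print(f"t={tick}: stage {stage} processes micro-batch {mb}")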
no code implementations • NAACL 2021 • Pengcheng Yang, Pei Zhang, Boxing Chen, Jun Xie, Weihua Luo
Document machine translation aims to translate the source sentence into the target language in the presence of additional contextual information.
no code implementations • ACL (RepL4NLP) 2021 • Damai Dai, Hua Zheng, Fuli Luo, Pengcheng Yang, Baobao Chang, Zhifang Sui
Conventional Knowledge Graph Completion (KGC) assumes that all test entities appear during training.
no code implementations • 27 Dec 2019 • Pengcheng Yang, Boxing Chen, Pei Zhang, Xu Sun
Further analysis demonstrates that the proposed regularized training can effectively improve the agreement of attention on the image, leading to better use of visual information.
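The abstract does not spell out the regularizer, but an agreement term over two attention distributions on the image can be sketched, for instance, as a symmetric KL penalty (an assumption for illustration, not the paper's exact formulation):

import numpy as np

# Hypothetical agreement regularizer: penalize disagreement between two
# attention distributions (e.g. from two decoders) over image regions.
def attention_agreement_penalty(attn_a, attn_b, eps=1e-9):
    p = (attn_a + eps) / (attn_a + eps).sum()
    q = (attn_b + eps) / (attn_b + eps).sum()
    # symmetric KL divergence; 0 when the two attentions agree exactly
    return float((p * np.log(p / q)).sum() + (q * np.log(q / p)).sum())

a = np.array([0.7, 0.2, 0.1])   # attention over 3 image regions
b = np.array([0.5, 0.3, 0.2])
print(attention_agreement_penalty(a, b))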
no code implementations • IJCNLP 2019 • Pengcheng Yang, Junyang Lin, Jingjing Xu, Jun Xie, Qi Su, Xu Sun
The task of unsupervised sentiment modification aims to reverse the sentiment polarity of the input text while preserving its semantic content without any parallel data.
no code implementations • IJCNLP 2019 • Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, Xu Sun
We provide representative baselines for these tasks and further introduce a coarse-to-fine model for clarification question generation.
1 code implementation • IJCNLP 2019 • Fuli Luo, Shunyao Li, Pengcheng Yang, Lei Li, Baobao Chang, Zhifang Sui, Xu Sun
It consists of a generator to produce pun sentences, and a discriminator to distinguish between the generated pun sentences and the real sentences with specific word senses.
1 code implementation • ACL 2019 • Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, Xu Sun
We propose a novel model to separate the generation into two stages: key fact prediction and surface realization.
1 code implementation • ACL 2019 • Wenhuan Zeng, Abulikemu Abuduweili, Lei Li, Pengcheng Yang
Comments on social media are highly diverse in content, style, and vocabulary, which makes generating comments much more challenging than other existing natural language generation (NLG) tasks.
1 code implementation • ACL 2019 • Pengcheng Yang, Fuli Luo, Shuming Ma, Junyang Lin, Xu Sun
In this way, we can reduce the dependence of the model on the label order, as well as capture high-order correlations between labels.
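One way to remove the dependence on label order, consistent with the description above, is to score the predicted label set rather than the label sequence, e.g. with a set-level F1 reward; a minimal sketch follows (the paper's exact reward may differ):

def set_f1_reward(predicted_labels, gold_labels):
    """Order-invariant reward: F1 between predicted and gold label sets."""
    pred, gold = set(predicted_labels), set(gold_labels)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# The reward is identical for any ordering of the same predicted labels.
print(set_f1_reward(["sports", "politics"], ["politics", "sports"]))  # 1.0
print(set_f1_reward(["politics", "sports"], ["politics", "sports"]))  # 1.0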
no code implementations • ACL 2019 • Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, Zhifang Sui
To relieve these problems, we first propose a force attention (FA) method to encourage the generator to pay more attention to the uncovered attributes, so as to avoid missing potential key attributes.
1 code implementation • ACL 2019 • Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, Xu Sun
Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms.
no code implementations • ACL 2019 • Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, Xu Sun
Experiments show that with external commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent than existing methods in terms of both automatic and human evaluation.
no code implementations • ACL 2019 • Pengcheng Yang, Fuli Luo, Peng Chen, Tianyu Liu, Xu Sun
The task of unsupervised bilingual lexicon induction (UBLI) aims to induce word translations from monolingual corpora in two languages.
no code implementations • ACL 2019 • Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, Xu Sun
Therefore, we propose a generic and novel framework which consists of a sentiment analyzer and a sentimental generator, respectively addressing the two challenges.
1 code implementation • ACL 2019 • Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, Xu Sun
In this paper, we focus on the task of fine-grained text sentiment transfer (FGST).
no code implementations • 24 May 2019 • Zhiyuan Zhang, Pengcheng Yang, Xuancheng Ren, Qi Su, Xu Sun
Neural network learning is usually time-consuming since backpropagation needs to compute full gradients and backpropagate them across multiple layers.
2 code implementations • 24 May 2019 • Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, Xu Sun
Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.
Ranked #1 on Unsupervised Text Style Transfer on GYAFC
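A minimal sketch of the dual-reward intuition behind this one-step transfer model: the transferred sentence is rewarded for matching the target style and for being mappable back to the source by the dual model (content preservation). The style classifier and back-transfer scorer below are hypothetical stand-ins, not the paper's components:

def dual_reward(source, transferred, style_prob, back_logprob,
                target_style, alpha=0.5):
    """Combine a style reward with a content (back-transfer) reward."""
    style_reward = style_prob(transferred)[target_style]
    content_reward = back_logprob(transferred, source)
    return alpha * style_reward + (1 - alpha) * content_reward

# Dummy scorers for illustration only.
style_prob = lambda s: {"formal": 0.9, "informal": 0.1}
back_logprob = lambda t, s: -1.2   # log p(source | transferred) from dual model
print(dual_reward("gonna go now", "I am going to leave now",
                  style_prob, back_logprob, "formal"))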
1 code implementation • IJCAI 2019 • Pengcheng Yang, Fuli Luo, Peng Chen, Lei Li, Zhiyi Yin, Xiaodong He, Xu Sun
The visual storytelling (VST) task aims at generating a reasonable and coherent paragraph-level story with the image stream as input.
Ranked #21 on Visual Storytelling on VIST
no code implementations • 1 Nov 2018 • Pengcheng Yang, Fuli Luo, Shuangzhi Wu, Jingjing Xu, Dongdong Zhang, Xu Sun
In order to avoid such sophisticated alternate optimization, we propose to learn the unsupervised word mapping by directly maximizing the mean discrepancy between the distributions of the transferred embeddings and the target embeddings.
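The mean discrepancy at the heart of this objective can be sketched with a Gaussian-kernel MMD estimate between the transferred source embeddings and the target embeddings (the notation W, X, Y, sigma here is illustrative, not the paper's):

import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # a: (n, d), b: (m, d) -> (n, m) kernel matrix
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # source word embeddings
Y = rng.normal(size=(120, 50))   # target word embeddings
W = np.eye(50)                   # word mapping to be learned
print(mmd2(X @ W, Y))            # discrepancy statistic the mapping optimizes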
no code implementations • 10 Sep 2018 • Pengcheng Yang, Shuming Ma, Yi Zhang, Junyang Lin, Qi Su, Xu Sun
However, the Seq2Seq model is not inherently suited to the MLTC task.
1 code implementation • EMNLP 2018 • Junyang Lin, Qi Su, Pengcheng Yang, Shuming Ma, Xu Sun
We propose a novel model for multi-label text classification, which is based on sequence-to-sequence learning.
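Viewing multi-label classification as sequence generation means decoding labels one at a time until an end-of-sequence token; a minimal greedy sketch follows (the step-score function standing in for the decoder is hypothetical):

def decode_labels(step_scores, max_labels=10):
    """Greedily emit labels until <eos>, forbidding repeats."""
    emitted = []
    for _ in range(max_labels):
        scores = dict(step_scores(emitted))   # hypothetical decoder step
        for lab in emitted:
            scores[lab] = float("-inf")       # a label is emitted at most once
        best = max(scores, key=scores.get)
        if best == "<eos>":
            break
        emitted.append(best)
    return emitted

# Dummy decoder: prefers "sports", then "politics", then stops.
dummy = lambda history: {"sports": 2.0, "politics": 1.0, "<eos>": 0.5}
print(decode_labels(dummy))   # ['sports', 'politics']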
1 code implementation • EMNLP 2018 • Yi Zhang, Jingjing Xu, Pengcheng Yang, Xu Sun
The task of sentiment modification requires reversing the sentiment of the input and preserving the sentiment-independent content.
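At its simplest, separating sentiment from content can be illustrated with a lexicon swap: keep the sentiment-independent words and replace only the polarity-bearing ones. The paper learns this separation (sentiment memories) rather than using a fixed lexicon, so the sketch below is only illustrative:

# Toy lexicon standing in for learned sentiment memories (illustrative).
SWAP = {"great": "terrible", "good": "bad", "delicious": "bland",
        "terrible": "great", "bad": "good", "bland": "delicious"}

def flip_sentiment(sentence: str) -> str:
    # sentiment-independent words pass through unchanged
    return " ".join(SWAP.get(w, w) for w in sentence.split())

print(flip_sentiment("the food was delicious and the service was great"))
# -> "the food was bland and the service was terrible"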
no code implementations • 22 Aug 2018 • Deli Chen, Shuming Ma, Pengcheng Yang, Xu Sun
In this work, we introduce a novel task: high-quality comment identification (HQCI), which aims to automatically assess the quality of online comments.
1 code implementation • COLING 2018 • Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, Houfeng Wang
Further analysis of experimental results demonstrates that the proposed methods not only capture the correlations between labels, but also select the most informative words automatically when predicting different labels.
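The "select the most informative words" behavior comes from attention: at each label-decoding step, the decoder state weights the encoder states over the input words. A minimal dot-product sketch (shapes and scoring function are illustrative):

import numpy as np

def word_attention(decoder_state, encoder_states):
    """Weights over input words for the current label-decoding step."""
    scores = encoder_states @ decoder_state           # (num_words,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax
    context = weights @ encoder_states                # attended summary
    return context, weights

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 8))      # encoder states for 6 input words
s = rng.normal(size=8)           # decoder state for the current label
ctx, w = word_attention(s, H)
print(w)                         # per-word informativeness for this label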
1 code implementation • ACL 2018 • Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma
As more and more academic papers are submitted to conferences and journals, having professionals evaluate all of them is time-consuming and can introduce inequality stemming from reviewers' personal factors.