no code implementations • EMNLP 2021 • Haolan Zhan, Lei Shen, Hongshen Chen, Hainan Zhang
Knowledge-grounded dialogue generation has achieved promising performance by engaging external knowledge sources.
no code implementations • ECNLP (ACL) 2022 • Zeming Wang, Yanyan Zou, Yuejian Fang, Hongshen Chen, Mian Ma, Zhuoye Ding, Bo Long
As multi-modal e-commerce thrives, high-quality advertising product copywriting has gained increasing attention, since it plays a crucial role in e-commerce recommendation, advertising, and even search platforms. Advertising product copywriting enhances the user experience by highlighting a product's characteristics in textual descriptions, thus improving the likelihood of user clicks and purchases.
no code implementations • EMNLP 2021 • Haoran Xu, Hainan Zhang, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Yanyan Lan
Although exposure bias has been widely studied in some NLP tasks, it poses unique challenges in dialogue response generation, a representative one-to-many generation scenario. In real human dialogue, there are many appropriate responses to the same context, differing not only in expression but also in topic.
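Exposure bias arises because training conditions the decoder on gold tokens while inference conditions it on its own predictions. One standard mitigation (a generic technique, not necessarily this paper's method) is scheduled sampling, which can be sketched as:

```python
import random

def choose_inputs(gold_tokens, predicted_tokens, teacher_forcing_ratio):
    """Scheduled sampling sketch: at each decoding step, feed the gold
    token with probability `teacher_forcing_ratio`, otherwise feed the
    model's own prediction. All names here are illustrative."""
    inputs = []
    for gold, pred in zip(gold_tokens, predicted_tokens):
        if random.random() < teacher_forcing_ratio:
            inputs.append(gold)
        else:
            inputs.append(pred)
    return inputs

# Ratio 1.0 reduces to pure teacher forcing; 0.0 to free running.
gold = ["hello", "world", "!"]
pred = ["hi", "earth", "?"]
assert choose_inputs(gold, pred, 1.0) == gold
assert choose_inputs(gold, pred, 0.0) == pred
```

Annealing the ratio from 1.0 toward 0.0 over training gradually exposes the model to its own predictions.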
no code implementations • NAACL 2022 • Yue Fang, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Bo Long, Yanyan Lan, Yanquan Zhou
Firstly, an utterance rewriter is applied to complete the elided content of the dialogue and obtain the rewritten utterances.
no code implementations • Findings (EMNLP) 2021 • Xu Wang, Hainan Zhang, Shuai Zhao, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Bo Cheng, Yanyan Lan
Furthermore, consistency signals between each candidate and the speaker's own history are used to drive the model to prefer a candidate that is logically consistent with the speaker's history.
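As a rough illustration (not the paper's actual model), such a consistency signal can be approximated by scoring each candidate response against the speaker's own history, e.g. via token overlap, and preferring the highest-scoring candidate:

```python
def consistency_score(candidate, history):
    """Toy consistency signal: Jaccard overlap between a candidate's
    tokens and the tokens of the speaker's history. Illustrative only;
    real models learn this signal rather than computing overlap."""
    cand = set(candidate.lower().split())
    hist = set(" ".join(history).lower().split())
    if not cand or not hist:
        return 0.0
    return len(cand & hist) / len(cand | hist)

def pick_candidate(candidates, history):
    # Prefer the candidate most consistent with the speaker's history.
    return max(candidates, key=lambda c: consistency_score(c, history))

history = ["i am a vegetarian", "i never eat meat"]
candidates = ["i love steak", "i only eat vegetarian food"]
assert pick_candidate(candidates, history) == "i only eat vegetarian food"
```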
no code implementations • 14 Sep 2021 • Lei Shen, Haolan Zhan, Xin Shen, Hongshen Chen, Xiaofang Zhao, Xiaodan Zhu
The training method updates the parameters of a trained NCM on two small sets containing newly maintained and removed samples, respectively.
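In the spirit of fine-tuning-based unlearning (a toy sketch, not the paper's actual procedure), such an update can descend on the loss of maintained samples while ascending on the loss of removed samples so the model forgets them:

```python
def unlearning_step(param, grad_maintained, grad_removed, lr=0.1):
    """Toy scalar update: one gradient-descent step on the maintained
    samples' loss plus one gradient-ascent step on the removed samples'
    loss. Names, signs, and the single-parameter setting are illustrative."""
    return param - lr * grad_maintained + lr * grad_removed

# Descend toward the maintained data, move away from the removed data.
updated = unlearning_step(1.0, grad_maintained=0.5, grad_removed=0.2)
assert abs(updated - 0.97) < 1e-9
```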
1 code implementation • Findings (EMNLP) 2021 • Junpeng Liu, Yanyan Zou, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Caixia Yuan, Xiaojie Wang
To capture the various topics of a conversation and outline salient facts for the captured topics, this work proposes two topic-aware contrastive learning objectives, namely coherence detection and sub-summary generation, which are expected to implicitly model topic changes and handle the information-scattering challenge in dialogue summarization.
Ranked #3 on Text Summarization on SAMSum Corpus
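A coherence-detection objective of this kind is typically contrastive: the coherent (positive) pairing should score higher than shuffled negatives. A generic InfoNCE-style loss, sketched here under assumptions rather than taken from the paper's code, looks like:

```python
import math

def contrastive_loss(pos_score, neg_scores, temperature=1.0):
    """Generic InfoNCE-style loss: negative log-softmax of the positive
    score against positive + negative scores. A higher positive score
    relative to the negatives yields a lower loss."""
    scores = [pos_score] + list(neg_scores)
    exps = [math.exp(s / temperature) for s in scores]
    return -math.log(exps[0] / sum(exps))

# Loss drops as the coherent pair outscores the shuffled negatives.
assert contrastive_loss(5.0, [0.0, 0.0]) < contrastive_loss(0.0, [5.0, 5.0])
```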
no code implementations • 26 Jun 2021 • Xu Yuan, Hongshen Chen, Yonghao Song, Xiaofang Zhao, Zhuoye Ding, Zhen He, Bo Long
In this paper, we propose a model, SSI, to improve sequential recommendation consistency with Self-Supervised Imitation.
no code implementations • NAACL 2021 • Haolan Zhan, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Yongjun Bao, Yanyan Lan
In particular, a sequential knowledge transition model equipped with a pre-trained knowledge-aware response generator (SKT-KG) formulates the high-level knowledge transition and fully utilizes the limited knowledge data.
no code implementations • 2 Mar 2021 • Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Zhuoye Ding, Yongjun Bao, Weipeng Yan, Yanyan Lan
To tackle this problem, we propose an adaptive posterior network based on Transformer architecture that can utilize user-cared information from customer reviews.
no code implementations • 16 Feb 2021 • Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Yanyan Lan, Zhuoye Ding, Dawei Yin
A simple and effective way is to extract keywords directly from the product knowledge base, i.e., attributes or title, as the recommendation reason.
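Such a baseline can be sketched as below (a hypothetical illustration, not the paper's pipeline): attributes are already keyword-like, so they are taken first, then the reason is filled from title tokens.

```python
def extract_reason(attributes, title, max_keywords=3):
    """Baseline sketch: build a recommendation reason from product
    attributes first, then from title tokens, deduplicated in order.
    Function name and interface are illustrative."""
    seen = []
    for word in attributes + title.split():
        if word not in seen:
            seen.append(word)
        if len(seen) == max_keywords:
            break
    return seen

reason = extract_reason(["waterproof", "leather"], "men hiking boots")
assert reason == ["waterproof", "leather", "men"]
```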
1 code implementation • COLING 2020 • Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu, Zhumin Chen
In response to this problem, we propose a multi-resolution adversarial model -- EmpDG, to generate more empathetic responses.
no code implementations • EMNLP 2020 • Shaoxiong Feng, Xuancheng Ren, Hongshen Chen, Bin Sun, Kan Li, Xu Sun
Human dialogues are scenario-based and appropriate responses generally relate to the latent context knowledge entailed by the specific scenario.
no code implementations • 27 Sep 2020 • Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin
Therefore, an ideal dialogue generation model should be able to capture the topic information of each context, detect the relevant context, and produce appropriate responses accordingly.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Hengyi Cai, Hongshen Chen, Yonghao Song, Zhuoye Ding, Yongjun Bao, Weipeng Yan, Xiaofang Zhao
Neural dialogue response generation has gained much popularity in recent years.
no code implementations • 16 Sep 2020 • Shaoxiong Feng, Hongshen Chen, Xuancheng Ren, Zhuoye Ding, Kan Li, Xu Sun
Collaborative learning has successfully applied knowledge transfer to guide a pool of small student networks towards robust local minima.
no code implementations • ACL 2020 • Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, Dawei Yin
In this paper, we propose a data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the effect of inefficient samples simultaneously.
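One simple way to reshape the training distribution in this spirit (a sketch, not the paper's framework) is to reweight examples by an effectiveness score and normalize, so effective samples are up-weighted and inefficient ones down-weighted:

```python
def reweight(scores):
    """Turn per-sample effectiveness scores into normalized sampling
    weights. Illustrative: a real framework would also augment samples
    and anneal the weighting schedule over training."""
    total = sum(scores)
    if total == 0:
        # Fall back to uniform weights when no signal is available.
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in scores]

weights = reweight([3.0, 1.0, 0.0])
assert abs(sum(weights) - 1.0) < 1e-9
assert weights[0] > weights[1] > weights[2]
```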
no code implementations • 4 Mar 2020 • Shaoxiong Feng, Hongshen Chen, Kan Li, Dawei Yin
Neural conversational models learn to generate responses by taking into account the dialog history.
1 code implementation • 2 Mar 2020 • Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Yangxi Li, Dongsheng Duan, Dawei Yin
Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses.
1 code implementation • IJCNLP 2019 • Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Dawei Yin
For each conversation, the model generates parameters of the encoder-decoder by referring to the input context.
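Conceptually, this is a hypernetwork: a small network maps the context representation to the weights of the encoder-decoder. A minimal sketch with plain Python lists (all dimensions and names are illustrative, not the paper's architecture):

```python
def generate_params(context_vec, meta_weights):
    """Hypernetwork sketch: a linear map from the per-conversation
    context vector to a flat parameter vector for the downstream
    encoder-decoder. meta_weights is [out_dim][in_dim]; sizes illustrative."""
    return [sum(w * x for w, x in zip(row, context_vec)) for row in meta_weights]

def linear_layer(params, inputs):
    # Interpret the generated flat params as a 2x2 weight matrix.
    w = [params[0:2], params[2:4]]
    return [sum(wi * xi for wi, xi in zip(row, inputs)) for row in w]

context = [1.0, 0.5]                     # per-conversation context encoding
meta = [[0.2, 0.1], [0.0, 0.3], [0.4, 0.0], [0.1, 0.1]]
params = generate_params(context, meta)  # 4 generated parameters
out = linear_layer(params, [1.0, 1.0])
assert len(params) == 4
```

Different conversations thus yield different generated parameters from the same meta-network.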
2 code implementations • 31 Aug 2018 • Xisen Jin, Wenqiang Lei, Zhaochun Ren, Hongshen Chen, Shangsong Liang, Yihong Zhao, Dawei Yin
However, the expensive nature of state labeling and the weak interpretability make dialogue state tracking a challenging problem for both task-oriented and non-task-oriented dialogue generation: for generating responses in task-oriented dialogues, state tracking is usually learned from manually annotated corpora, where human annotation is expensive; for generating responses in non-task-oriented dialogues, most existing work neglects explicit state tracking due to the unlimited number of dialogue states.
1 code implementation • ACL 2018 • Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, Dawei Yin
Our empirical study on a real-world dataset proves that our model is capable of generating meaningful, diverse, and natural responses for both factoid questions and knowledge-grounded chit-chats.
no code implementations • 6 Nov 2017 • Hongshen Chen, Xiaorui Liu, Dawei Yin, Jiliang Tang
Dialogue systems have attracted increasing attention.