We present a general approach for controlling societal biases in natural language generation (NLG).
Despite continuing efforts to improve the engagingness and consistency of chit-chat dialogue systems, most current work simply focuses on mimicking human-like responses, leaving the modeling of mutual understanding between interlocutors understudied.
A weighted joint prediction paradigm over both context and response is designed to evaluate the performance of models with and without the context-prediction loss term.
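As a rough sketch of what such a weighted joint objective could look like (the weight values, tensor shapes, and function name below are illustrative assumptions, not the paper's actual formulation):

```python
import torch.nn.functional as F

def weighted_joint_loss(context_logits, context_targets,
                        response_logits, response_targets,
                        w_context=0.5, w_response=1.0):
    # Cross-entropy over the predicted context tokens.
    # logits: (batch * seq_len, vocab); targets: (batch * seq_len,)
    loss_context = F.cross_entropy(context_logits, context_targets)
    # Cross-entropy over the predicted response tokens.
    loss_response = F.cross_entropy(response_logits, response_targets)
    # Setting w_context=0 recovers a model trained without
    # the context-prediction loss term.
    return w_context * loss_context + w_response * loss_response
```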
Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses.
For each conversation, the model generates the parameters of the encoder-decoder conditioned on the input context.
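One way to read this is as a hypernetwork-style design: a small network maps a context encoding to the weights of a layer in the dialogue model. The sketch below uses assumed names and dimensions; the paper's actual parameter-generation scheme may differ.

```python
import torch
import torch.nn as nn

class ContextHyperNetwork(nn.Module):
    """Generates the weights of a decoder output projection
    from an encoding of the input context (hypernetwork-style)."""

    def __init__(self, ctx_dim=256, hidden_dim=128, vocab_size=1000):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        # Maps a context vector to a flattened weight matrix and bias.
        self.weight_gen = nn.Linear(ctx_dim, hidden_dim * vocab_size)
        self.bias_gen = nn.Linear(ctx_dim, vocab_size)

    def forward(self, ctx_vec, decoder_states):
        # ctx_vec: (batch, ctx_dim); decoder_states: (batch, seq, hidden_dim)
        W = self.weight_gen(ctx_vec).view(-1, self.hidden_dim, self.vocab_size)
        b = self.bias_gen(ctx_vec).unsqueeze(1)
        # Per-conversation output projection: (batch, seq, vocab_size)
        return torch.bmm(decoder_states, W) + b
```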
In this paper, we propose a novel knowledge-aware dialogue generation model (called TransDG), which transfers question-representation and knowledge-matching abilities from the knowledge base question answering (KBQA) task to facilitate utterance understanding and factual knowledge selection for dialogue generation.
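A minimal sketch of the transfer idea, assuming a shared encoder architecture between the KBQA and dialogue models; all class names and dimensions below are hypothetical, not TransDG's actual components:

```python
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Encoder reusable across the KBQA and dialogue tasks."""

    def __init__(self, vocab_size=30000, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, tokens):
        # tokens: (batch, seq) -> hidden states: (batch, seq, d_model)
        hidden, _ = self.rnn(self.embed(tokens))
        return hidden

kbqa_encoder = UtteranceEncoder()
# ... assume kbqa_encoder has been trained on a KBQA objective ...
dialogue_encoder = UtteranceEncoder()
# Transfer the pretrained question-representation ability.
dialogue_encoder.load_state_dict(kbqa_encoder.state_dict())
```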
Then, a self-attention mechanism is used to update both the context and masked-response representations.
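A minimal sketch of this update step, assuming the context and masked response are concatenated into one sequence before self-attention (sequence lengths and dimensions are illustrative):

```python
import torch
import torch.nn as nn

d_model, n_heads = 256, 4
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

ctx = torch.randn(2, 20, d_model)         # context token embeddings
masked_rsp = torch.randn(2, 10, d_model)  # response with some tokens masked

seq = torch.cat([ctx, masked_rsp], dim=1)  # (batch, 30, d_model)
updated, _ = attn(seq, seq, seq)           # tokens attend across both segments

ctx_updated = updated[:, :20]   # refreshed context representation
rsp_updated = updated[:, 20:]   # refreshed masked-response representation
```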
Multiple sequence-to-sequence models were used to build an end-to-end multi-turn proactive dialogue generation agent, aided by data augmentation techniques and variant encoder-decoder structure designs.
Generating responses that are consistent with the dialogue context is one of the central challenges in building engaging conversational agents.