One of the keys to enabling chatbots to communicate with humans more naturally is the ability to handle long and complex user utterances.
One of the major drawbacks of modularized task-completion dialogue systems is that each module is trained individually, which presents several challenges.
Finally, I conduct a detailed analysis of how the vanilla model performs on conversational data by comparing it to previous chatbot models and how the additional features affect the quality of the generated responses.
Humans generate responses by relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context.
To build a high-quality open-domain chatbot, we introduce the effective training process of PLATO-2 via curriculum learning.
Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes.
In this paper, we introduce the use of Semantic Hashing as embedding for the task of Intent Classification and achieve state-of-the-art performance on three frequently used benchmarks.
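The core idea behind Semantic Hashing embeddings is to replace a learned word vocabulary with hashed character n-grams, making the representation robust to misspellings and out-of-vocabulary words. A minimal sketch of this idea (the function names, bucket count, and hash choice here are illustrative assumptions, not the paper's actual implementation):

```python
# Sketch of Semantic Hashing subword features for intent classification.
# Each word is padded with '#' and split into character trigrams; each
# trigram is hashed into one of a fixed number of buckets, producing a
# multi-hot vector that can feed any standard classifier.
import hashlib

def semantic_subtokens(text, n=3):
    """Split text into words and emit character n-grams of '#word#'."""
    subtokens = []
    for word in text.lower().split():
        padded = f"#{word}#"
        subtokens.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return subtokens

def hash_embedding(text, buckets=1024):
    """Multi-hot vector: each trigram activates one of `buckets` dimensions."""
    vec = [0] * buckets
    for tri in semantic_subtokens(text):
        idx = int(hashlib.md5(tri.encode()).hexdigest(), 16) % buckets
        vec[idx] = 1
    return vec
```

Because the vector depends only on character trigrams, a typo like "flght" still shares most buckets with "flight", which is what gives this scheme its robustness on small intent-classification datasets.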
Experimental results show that the multilingual models outperform the translation pipeline and are on par with the monolingual models, with the advantage of using a single model across multiple languages.