Domain adaptation has recently become a key problem in dialogue systems research.
Learning with minimal data is one of the key challenges in the development of practical, production-ready goal-oriented dialogue systems.
Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses, which leads to a frustrating user experience.
We present a new dataset for studying the robustness of dialog systems to out-of-domain (OOD) input: bAbI Dialog Task 6 augmented with OOD content in a controlled way.
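The kind of controlled OOD augmentation described above can be sketched as follows. This is only an illustration of the idea, not the actual bAbI+/OOD construction procedure: the function name, the placeholder response, the insertion probability, and the example OOD utterances are all assumptions.

```python
import random

# Hypothetical OOD utterances -- not drawn from any real dataset.
OOD_POOL = [
    "what's the weather like today",
    "tell me a joke",
]

def augment_with_ood(dialogue, ood_pool=OOD_POOL, p=0.3, seed=0):
    """Insert OOD user turns into a task-oriented dialogue in a controlled way.

    `dialogue` is a list of (user_utterance, system_response) pairs.
    Before each real turn, an OOD turn is inserted with probability `p`;
    its paired response is a placeholder the system should learn to emit
    for anomalous input. A fixed seed keeps the augmentation reproducible.
    """
    rng = random.Random(seed)
    augmented = []
    for user, system in dialogue:
        if rng.random() < p:
            augmented.append((rng.choice(ood_pool), "<OOD_RESPONSE>"))
        augmented.append((user, system))
    return augmented
```

The original turns are kept in order, so in-domain supervision is unchanged and only the OOD insertion rate varies with `p`.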
Using a dataset of real conversations collected in the 2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good' system responses to user utterances, i.e. responses likely to lead to long and engaging conversations.
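The rank-and-select interface such a model exposes can be sketched with a minimal stand-in scorer. The actual system is a neural ranker trained on Alexa Prize conversations; here plain lexical overlap (bag-of-words cosine similarity) stands in for the learned scoring function, and all names below are illustrative.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term frequencies for a lower-cased utterance."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def rank_responses(context, candidates):
    """Return candidate responses sorted best-first against the context.

    Stand-in for the neural ranker: the score here is lexical overlap,
    whereas the real model would encode the context and each candidate
    with a neural network and score the pair.
    """
    ctx = bow(context)
    return sorted(candidates, key=lambda c: cosine(ctx, bow(c)), reverse=True)
```

Swapping the scoring function for a learned context-response model leaves the selection loop itself unchanged.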
To test the model's generalisation potential, we evaluate the same model on the bAbI+ dataset, without any additional training.
Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence.
Results show that the semantic accuracy of the MemN2N model drops drastically, and that although the model is in principle able to learn to process the constructions in bAbI+, it needs an impractical amount of training data to do so.
Our experiments show that our model can process 74% of the Facebook AI bAbI dataset even when trained on only 0.13% of the data (5 dialogues).