no code implementations • 5 Dec 2020 • Igor Shalyminov
In this thesis, we address the above issues by introducing a series of methods for training robust dialogue systems from minimal data.
no code implementations • 3 Mar 2020 • Igor Shalyminov, Alessandro Sordoni, Adam Atkinson, Hannes Schulz
Domain adaptation has recently become a key problem in dialogue systems research.
no code implementations • IJCNLP 2019 • Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon
Our main dataset is the Stanford Multi-Domain dialogue corpus.
no code implementations • WS 2019 • Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon
Learning with minimal data is one of the key challenges in the development of practical, production-ready goal-oriented dialogue systems.
1 code implementation • 24 May 2019 • Sungjin Lee, Igor Shalyminov
Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses, leading to a frustrating user experience.
1 code implementation • 29 Nov 2018 • Igor Shalyminov, Sungjin Lee
We present a new dataset for studying the robustness of dialog systems to out-of-domain (OOD) input: bAbI Dialog Task 6 augmented with OOD content in a controlled way.
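The controlled-augmentation idea can be sketched as injecting OOD user turns into in-domain dialogues at a fixed rate. This is a minimal illustrative sketch, not the paper's actual procedure; the function names, the `rate` parameter, and the `<fallback>` target label are all assumptions.

```python
import random

def augment_with_ood(dialogues, ood_utterances, rate=0.2, seed=0):
    """Insert OOD user turns into dialogues at a controlled rate.

    Hypothetical sketch: the paper's augmentation of bAbI Dialog
    Task 6 may differ; names and labels here are illustrative.
    """
    rng = random.Random(seed)
    augmented = []
    for dialogue in dialogues:
        turns = []
        for user_turn, system_turn in dialogue:
            if rng.random() < rate:
                # Inject an OOD user turn; the system is expected to
                # answer with a fallback rather than an in-domain reply.
                turns.append((rng.choice(ood_utterances), "<fallback>"))
            turns.append((user_turn, system_turn))
        augmented.append(turns)
    return augmented

dialogues = [[("book a table", "which cuisine?"),
              ("italian", "how many people?")]]
ood = ["what's the weather like?", "tell me a joke"]
print(augment_with_ood(dialogues, ood, rate=1.0))
```

Keeping the injection rate as an explicit parameter is what makes the augmentation "controlled": robustness can then be measured as a function of OOD density.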
1 code implementation • WS 2018 • Igor Shalyminov, Ondřej Dušek, Oliver Lemon
Using a dataset of real conversations collected in the 2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good' system responses to user utterances, i.e. responses which are likely to lead to long and engaging conversations.
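The ranking setup can be illustrated with a tiny scoring-and-sorting skeleton. This is an assumption-laden sketch: the paper's ranker is a trained neural model over real Alexa Prize conversations, whereas the word-overlap `overlap_score` below is only a stand-in scoring function, and all names are hypothetical.

```python
def overlap_score(context, response):
    # Toy scorer: word overlap between the dialogue context and a
    # candidate response. A real ranker would be a trained neural
    # model predicting conversation length/engagement.
    ctx = set(context.lower().split())
    return len(ctx & set(response.lower().split()))

def rank_responses(context, candidates, score_fn=overlap_score):
    # Return candidate system responses sorted best-first by score.
    return sorted(candidates, key=lambda c: score_fn(context, c),
                  reverse=True)

context = "I love talking about movies"
candidates = ["Me too, what movies do you love?", "Okay.", "Goodbye."]
print(rank_responses(context, candidates))
```

The point of the skeleton is the interface: any scorer that maps (context, response) to a number can slot into the same ranking loop.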
no code implementations • 8 Oct 2018 • Igor Shalyminov, Arash Eshghi, Oliver Lemon
To test the model's generalisation potential, we evaluate the same model on the bAbI+ dataset, without any additional training.
no code implementations • 20 Dec 2017 • Ioannis Papaioannou, Amanda Cercas Curry, Jose L. Part, Igor Shalyminov, Xinnuo Xu, Yanchao Yu, Ondřej Dušek, Verena Rieser, Oliver Lemon
Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence.
1 code implementation • 22 Sep 2017 • Igor Shalyminov, Arash Eshghi, Oliver Lemon
Results show that the semantic accuracy of the MemN2N model drops drastically, and that, although it is in principle able to learn to process the constructions in bAbI+, it needs an impractical amount of training data to do so.
no code implementations • EMNLP 2017 • Arash Eshghi, Igor Shalyminov, Oliver Lemon
Our experiments show that our model can process 74% of the Facebook AI bAbI dataset even when trained on only 0.13% of the data (5 dialogues).