Search Results for author: Igor Shalyminov

Found 11 papers, 4 papers with code

Data-Efficient Methods for Dialogue Systems

no code implementations • 5 Dec 2020 • Igor Shalyminov

In this thesis, we address the above issues by introducing a series of methods for training robust dialogue systems from minimal data.

Anomaly Detection • Data Augmentation • +3

Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation

1 code implementation • 24 May 2019 • Sungjin Lee, Igor Shalyminov

Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses, which leads to a frustrating user experience.

Data Augmentation • OOD Detection
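The core idea of counterfeit data augmentation can be sketched as follows: pair in-domain dialogue contexts with utterances drawn from an unrelated corpus and label those pairs as OOD, so the model sees anomalous input at training time. This is a minimal illustrative sketch, not the paper's implementation; the function name and data layout are assumptions.

```python
import random

def make_counterfeit_ood(in_domain, other_corpus, seed=0):
    """Augment (context, utterance) pairs with counterfeit OOD examples:
    each context is also paired with a randomly sampled utterance from
    an unrelated corpus and labeled OOD (hypothetical helper)."""
    rng = random.Random(seed)
    augmented = []
    for context, utterance in in_domain:
        augmented.append((context, utterance, "IND"))    # original in-domain pair
        counterfeit = rng.choice(other_corpus)           # out-of-context utterance
        augmented.append((context, counterfeit, "OOD"))  # counterfeit OOD pair
    return augmented

data = make_counterfeit_ood(
    [("how can I help?", "book a table for two")],
    ["the weather is nice today", "play some jazz"],
)
```

An OOD detector trained on `data` then learns to flag utterances that do not fit the dialogue context instead of responding to them.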

Improving Robustness of Neural Dialog Systems in a Data-Efficient Way with Turn Dropout

1 code implementation • 29 Nov 2018 • Igor Shalyminov, Sungjin Lee

We present a new dataset for studying the robustness of dialog systems to OOD input, which is bAbI Dialog Task 6 augmented with OOD content in a controlled way.
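Turn dropout, as the title describes it, can be sketched as a simple training-time augmentation: whole turns are randomly removed from the dialogue history so the model cannot over-rely on any single turn. This is a hedged sketch under that reading; the function name and parameters are assumptions, not the paper's code.

```python
import random

def turn_dropout(history, p=0.3, seed=0):
    """Randomly drop whole turns from the dialogue history with
    probability p, always keeping the most recent turn
    (illustrative sketch, not the paper's implementation)."""
    rng = random.Random(seed)
    kept = [turn for turn in history[:-1] if rng.random() >= p]
    return kept + history[-1:]  # current turn is never dropped

augmented = turn_dropout(["hi", "table for two?", "sure", "at 7pm"], p=0.5)
```

Applying this at each training step exposes the model to many truncated views of the same dialogue, which is one way to obtain robustness from limited data.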

Neural Response Ranking for Social Conversation: A Data-Efficient Approach

1 code implementation • WS 2018 • Igor Shalyminov, Ondřej Dušek, Oliver Lemon

Using a dataset of real conversations collected in the 2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good' system responses to user utterances, i.e. responses which are likely to lead to long and engaging conversations.
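The ranking setup described above can be sketched generically: score each candidate response against the dialogue context and return the candidates best-first. The word-overlap scorer below is a toy stand-in for the paper's neural scorer, and all names here are illustrative assumptions.

```python
def rank_responses(context, candidates, score_fn):
    """Sort candidate responses best-first by score_fn(context, candidate).
    score_fn is a stand-in for a learned neural scorer."""
    return sorted(candidates, key=lambda c: score_fn(context, c), reverse=True)

def overlap_score(context, candidate):
    # Toy scorer: count shared words between context and candidate.
    return len(set(context.lower().split()) & set(candidate.lower().split()))

best = rank_responses(
    "do you like jazz music",
    ["i love jazz music", "goodbye"],
    overlap_score,
)[0]
```

Swapping `overlap_score` for a trained model turns this into the response-selection loop the paper evaluates.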

Multi-Task Learning for Domain-General Spoken Disfluency Detection in Dialogue Systems

no code implementations • 8 Oct 2018 • Igor Shalyminov, Arash Eshghi, Oliver Lemon

To test the model's generalisation potential, we evaluate the same model on the bAbI+ dataset, without any additional training.

Multi-Task Learning

Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena

1 code implementation • 22 Sep 2017 • Igor Shalyminov, Arash Eshghi, Oliver Lemon

Results show that the semantic accuracy of the MemN2N model drops drastically, and that although it is in principle able to learn to process the constructions in bAbI+, it needs an impractical amount of training data to do so.
