Search Results for author: Igor Shalyminov

Found 16 papers, 6 papers with code

Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders

no code implementations · 7 Mar 2024 · Yuwei Zhang, Siffi Singh, Sailik Sengupta, Igor Shalyminov, Hang Su, Hwanjun Song, Saab Mansour

The triplet task gauges the model's understanding of two semantic concepts paramount in real-world conversational systems: negation and implicature.

Clustering · intent-classification +2
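
The triplet-style probe described above can be illustrated with a short script: an intent encoder should embed an utterance closer to its paraphrase than to its negation. A minimal sketch, assuming the sentence-transformers library; the model choice and example utterances are illustrative, not from the paper.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

anchor = "I want to cancel my subscription."
paraphrase = "Please terminate my subscription."
negation = "I do not want to cancel my subscription."

emb = model.encode([anchor, paraphrase, negation], convert_to_tensor=True)
sim_pos = util.cos_sim(emb[0], emb[1]).item()  # anchor vs. paraphrase
sim_neg = util.cos_sim(emb[0], emb[2]).item()  # anchor vs. negation

# The probe passes if the encoder separates the negation from the paraphrase.
print(f"anchor~paraphrase: {sim_pos:.3f}, anchor~negation: {sim_neg:.3f}")
print("triplet correct:", sim_pos > sim_neg)
```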

Semi-Supervised Dialogue Abstractive Summarization via High-Quality Pseudolabel Selection

1 code implementation · 6 Mar 2024 · Jianfeng He, Hang Su, Jason Cai, Igor Shalyminov, Hwanjun Song, Saab Mansour

Semi-supervised dialogue summarization (SSDS) leverages model-generated summaries to reduce reliance on human-labeled data and improve the performance of summarization models.

Abstractive Text Summarization · Natural Language Understanding
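
One way to read the pseudolabel-selection idea: score each model-generated summary and keep only the confident ones for further training. A minimal sketch, assuming the transformers library and using length-normalized log-likelihood as a stand-in quality score; the paper's actual selection criterion may differ, and the threshold is hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn").eval()

def pseudolabel_score(dialogue: str, summary: str) -> float:
    """Length-normalized log-likelihood of a model-generated summary."""
    inputs = tok(dialogue, return_tensors="pt", truncation=True)
    labels = tok(text_target=summary, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean NLL per label token
    return -loss.item()

def select_pseudolabels(pairs, threshold=-1.5):  # threshold is hypothetical
    """Keep only (dialogue, summary) pairs whose score clears the threshold."""
    return [p for p in pairs if pseudolabel_score(*p) > threshold]
```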

MAGID: An Automated Pipeline for Generating Synthetic Multi-modal Datasets

no code implementations · 5 Mar 2024 · Hossein Aboutalebi, Hwanjun Song, Yusheng Xie, Arshit Gupta, Justin Sun, Hang Su, Igor Shalyminov, Nikolaos Pappas, Siffi Singh, Saab Mansour

Development of multimodal interactive systems is hindered by the lack of rich, multimodal (text, images) conversational data, which is needed in large quantities for LLMs.

Image-text matching · Retrieval +1
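
A pipeline of this kind can be sketched as a control loop: choose which utterances to ground in an image, generate a candidate image, and keep it only if an image-text matching check passes. All three helper functions below are hypothetical stand-ins, not MAGID's actual components.

```python
def augment_dialogue(dialogue, select_utterance, generate_image, itm_score,
                     min_match=0.3):  # threshold is illustrative
    """Attach a generated image to selected utterances if it matches well enough."""
    augmented = []
    for utterance in dialogue:
        if select_utterance(utterance):            # e.g. an LLM-based selector
            image = generate_image(utterance)      # e.g. a diffusion model
            if itm_score(utterance, image) >= min_match:  # e.g. a CLIP score
                augmented.append((utterance, image))
                continue
        augmented.append((utterance, None))        # keep the turn text-only
    return augmented
```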

TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization

1 code implementation · 20 Feb 2024 · Liyan Tang, Igor Shalyminov, Amy Wing-mei Wong, Jon Burnsky, Jake W. Vincent, Yu'an Yang, Siffi Singh, Song Feng, Hwanjun Song, Hang Su, Lijia Sun, Yi Zhang, Saab Mansour, Kathleen McKeown

We find that there are diverse errors and error distributions in model-generated summaries and that non-LLM-based metrics can capture all error types better than LLM-based evaluators.

Hallucination · News Summarization +2
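
One example of a non-LLM-based metric of the kind the paper compares is NLI-based factual consistency: score each summary sentence for entailment against the source dialogue. A minimal sketch, assuming the transformers library; the model choice is illustrative and this is not TofuEval's own evaluator.

```python
from transformers import pipeline

nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def consistency_score(source: str, summary_sentences: list[str]) -> float:
    """Fraction of summary sentences entailed by the source dialogue."""
    entailed = 0
    for sent in summary_sentences:
        result = nli({"text": source, "text_pair": sent},
                     top_k=1, truncation=True)[0]
        entailed += result["label"] == "ENTAILMENT"
    return entailed / max(len(summary_sentences), 1)
```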

Enhancing Abstractiveness of Summarization Models through Calibrated Distillation

no code implementations · 20 Oct 2023 · Hwanjun Song, Igor Shalyminov, Hang Su, Siffi Singh, Kaisheng Yao, Saab Mansour

Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries.

Abstractive Text Summarization · Informativeness +1
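
Abstractiveness is commonly quantified as the share of summary n-grams not found in the source, and a signal of this kind is what a calibrated distillation scheme can optimize for. The helper below is an illustrative metric, not DisCal itself.

```python
def novel_ngram_ratio(source: str, summary: str, n: int = 2) -> float:
    """Share of summary n-grams that never appear in the source text."""
    def ngrams(text):
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    src, summ = ngrams(source), ngrams(summary)
    if not summ:
        return 0.0
    return len(summ - src) / len(summ)
```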

Data-Efficient Methods for Dialogue Systems

no code implementations · 5 Dec 2020 · Igor Shalyminov

In this thesis, we address the above issues by introducing a series of methods for training robust dialogue systems from minimal data.

Anomaly Detection · Data Augmentation +4

Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation

1 code implementation · 24 May 2019 · Sungjin Lee, Igor Shalyminov

Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses, which leads to a frustrating user experience.

Data Augmentation · Out of Distribution (OOD) Detection
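
Counterfeit data augmentation, as the title suggests, can be sketched as pairing in-domain dialogue contexts with utterances drawn from an unrelated corpus and labeling them OOD, so the model learns a rejection class. The label name and mixing rate below are hypothetical.

```python
import random

OOD_LABEL = "out_of_domain"  # hypothetical label name

def counterfeit_ood(in_domain_examples, foreign_utterances, rate=0.2):
    """Augment (context, utterance, label) triples with counterfeit OOD turns."""
    augmented = []
    for context, utterance, label in in_domain_examples:
        augmented.append((context, utterance, label))
        if random.random() < rate:
            # Same context, but a counterfeit user turn from another corpus.
            augmented.append((context, random.choice(foreign_utterances), OOD_LABEL))
    return augmented
```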

Improving Robustness of Neural Dialog Systems in a Data-Efficient Way with Turn Dropout

1 code implementation · 29 Nov 2018 · Igor Shalyminov, Sungjin Lee

We present a new dataset for studying the robustness of dialog systems to OOD input: bAbI Dialog Task 6 augmented with OOD content in a controlled way.
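
Turn dropout, as named in the title, can be sketched as randomly removing turns from the dialogue history during training so the model cannot over-rely on any single turn. The dropout rate below is illustrative.

```python
import random

def turn_dropout(history: list[str], p: float = 0.1) -> list[str]:
    """Drop each past turn with probability p, always keeping the latest turn."""
    kept = [turn for turn in history[:-1] if random.random() >= p]
    return kept + history[-1:]
```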

Neural Response Ranking for Social Conversation: A Data-Efficient Approach

1 code implementation · WS 2018 · Igor Shalyminov, Ondřej Dušek, Oliver Lemon

Using a dataset of real conversations collected in the 2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good' system responses to user utterances, i.e., responses which are likely to lead to long and engaging conversations.
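
A response ranker of this kind can be sketched as a bi-encoder: embed the context and each candidate response, then rank candidates by similarity. A minimal sketch assuming the sentence-transformers library; the encoder choice is illustrative, not the Alexa Prize model.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

def rank_responses(context: str, candidates: list[str]) -> list[str]:
    """Return candidate responses sorted from best to worst match."""
    ctx = encoder.encode(context, convert_to_tensor=True)
    cands = encoder.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(ctx, cands)[0]
    order = scores.argsort(descending=True)
    return [candidates[int(i)] for i in order]
```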

Multi-Task Learning for Domain-General Spoken Disfluency Detection in Dialogue Systems

no code implementations · 8 Oct 2018 · Igor Shalyminov, Arash Eshghi, Oliver Lemon

To test the model's generalisation potential, we evaluate the same model on the bAbI+ dataset, without any additional training.

Multi-Task Learning
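
The multi-task setup the title suggests can be sketched as one shared encoder with separate heads, e.g. per-token disfluency tags plus an utterance-level prediction. A minimal PyTorch sketch; the dimensions and the auxiliary task are assumptions, not the paper's architecture.

```python
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    def __init__(self, vocab_size, hidden=128, n_disfluency_tags=5, n_acts=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)  # shared encoder
        self.disfluency_head = nn.Linear(hidden, n_disfluency_tags)  # per token
        self.act_head = nn.Linear(hidden, n_acts)  # per utterance (assumed task)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        # Token-level disfluency logits and utterance-level logits from the
        # final state share the same encoder, which is the multi-task idea.
        return self.disfluency_head(states), self.act_head(states[:, -1])
```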

Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena

1 code implementation · 22 Sep 2017 · Igor Shalyminov, Arash Eshghi, Oliver Lemon

Results show that the semantic accuracy of the MemN2N model drops drastically, and that although it is in principle able to learn to process the constructions in bAbI+, it needs an impractical amount of training data to do so.

Retrieval · Sentence
