1 code implementation • 16 Jul 2024 • Shunqi Mao, Chaoyi Zhang, Hang Su, Hwanjun Song, Igor Shalyminov, Weidong Cai
Contextualized Image Captioning (CIC) evolves traditional image captioning into a more complex domain that requires multimodal reasoning.
2 code implementations • 1 Jul 2024 • Hwanjun Song, Hang Su, Igor Shalyminov, Jason Cai, Saab Mansour
Automated evaluation is crucial for streamlining text summarization benchmarking and model development, given the costly and time-consuming nature of human evaluation.
1 code implementation • 8 Jun 2024 • Jason Cai, Hang Su, Monica Sunkara, Igor Shalyminov, Saab Mansour
Large Language Models (LLMs) are powerful generative models, but they may not produce high-quality outputs on their first attempt.
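As a rough illustration of the general idea of iterative output refinement (a minimal sketch, not the paper's actual method; `generate` and `critique` below are hypothetical stand-ins for LLM calls):

```python
# Minimal self-refinement loop sketch (illustrative only).
# generate() and critique() are placeholder stubs for real LLM calls.

def generate(prompt: str) -> str:
    return f"draft answer for: {prompt}"  # placeholder for an LLM generation call

def critique(prompt: str, draft: str) -> str:
    return ""  # placeholder: empty feedback means the draft is acceptable

def refine(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(prompt, draft)
        if not feedback:  # no remaining issues, so stop early
            break
        draft = generate(f"{prompt}\nFeedback: {feedback}\nRevised answer:")
    return draft

print(refine("Summarize the meeting notes."))
```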
no code implementations • 7 Mar 2024 • Yuwei Zhang, Siffi Singh, Sailik Sengupta, Igor Shalyminov, Hang Su, Hwanjun Song, Saab Mansour
The triplet task gauges the model's understanding of two semantic concepts that are paramount in real-world conversational systems: negation and implicature.
1 code implementation • 6 Mar 2024 • Jianfeng He, Hang Su, Jason Cai, Igor Shalyminov, Hwanjun Song, Saab Mansour
Semi-supervised dialogue summarization (SSDS) leverages model-generated summaries to reduce reliance on human-labeled data and improve the performance of summarization models.
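One common way to leverage model-generated summaries is pseudo-labeling with confidence filtering; the sketch below shows that generic pattern (an assumption-laden illustration, not the SSDS method from the paper; `train` and `confidence` are hypothetical placeholders):

```python
# Generic self-training sketch for semi-supervised summarization
# (illustrative only; not the paper's SSDS approach).

labeled = [("dialogue 1 ...", "gold summary 1")]     # human-labeled pairs
unlabeled = ["dialogue 2 ...", "dialogue 3 ..."]     # dialogues without summaries

def train(pairs):
    # Placeholder: fit a summarizer on (dialogue, summary) pairs.
    return lambda d: f"summary of: {d[:20]}"

def confidence(model, dialogue, summary) -> float:
    return 0.9  # placeholder, e.g. average token log-probability of the summary

model = train(labeled)
for _ in range(2):                                   # a few self-training rounds
    pseudo = [(d, model(d)) for d in unlabeled]      # generate pseudo-summaries
    kept = [(d, s) for d, s in pseudo if confidence(model, d, s) > 0.8]
    model = train(labeled + kept)                    # retrain on gold + filtered pseudo data
```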
1 code implementation • 5 Mar 2024 • Hossein Aboutalebi, Hwanjun Song, Yusheng Xie, Arshit Gupta, Justin Sun, Hang Su, Igor Shalyminov, Nikolaos Pappas, Siffi Singh, Saab Mansour
Development of multimodal interactive systems is hindered by the lack of rich multimodal (text and image) conversational data, which LLMs require in large quantities.
1 code implementation • 20 Feb 2024 • Liyan Tang, Igor Shalyminov, Amy Wing-mei Wong, Jon Burnsky, Jake W. Vincent, Yu'an Yang, Siffi Singh, Song Feng, Hwanjun Song, Hang Su, Lijia Sun, Yi Zhang, Saab Mansour, Kathleen McKeown
We find that there are diverse errors and error distributions in model-generated summaries and that non-LLM-based metrics can capture all error types better than LLM-based evaluators.
no code implementations • 20 Oct 2023 • Hwanjun Song, Igor Shalyminov, Hang Su, Siffi Singh, Kaisheng Yao, Saab Mansour
Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries.
no code implementations • 5 Dec 2020 • Igor Shalyminov
In this thesis, we address the above issues by introducing a series of methods for training robust dialogue systems from minimal data.
no code implementations • 3 Mar 2020 • Igor Shalyminov, Alessandro Sordoni, Adam Atkinson, Hannes Schulz
Domain adaptation has recently become a key problem in dialogue systems research.
no code implementations • IJCNLP 2019 • Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon
Our main dataset is the Stanford Multi-Domain dialogue corpus.
no code implementations • WS 2019 • Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon
Learning with minimal data is one of the key challenges in the development of practical, production-ready goal-oriented dialogue systems.
1 code implementation • 24 May 2019 • Sungjin Lee, Igor Shalyminov
Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses, leading to a frustrating user experience.
1 code implementation • 29 Nov 2018 • Igor Shalyminov, Sungjin Lee
We present a new dataset for studying the robustness of dialog systems to OOD input: bAbI Dialog Task 6 augmented with OOD content in a controlled way.
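Controlled OOD augmentation of this kind can be pictured as inserting out-of-domain user turns (plus fallback responses) into in-domain dialogues at a fixed rate; the sketch below illustrates that idea only and is not the exact procedure used to build the dataset:

```python
import random

# Sketch of controlled OOD augmentation for a task-oriented dialogue
# (illustrative only; the dialogue, OOD pool, and fallback text are made up).

in_domain = [
    ("user", "book a table for two"),
    ("system", "api_call restaurant two"),
]
ood_pool = ["what's the weather like?", "tell me a joke"]  # out-of-domain user turns

def inject_ood(dialogue, pool, rate=0.5, seed=0):
    rng = random.Random(seed)  # seeded for reproducible, controlled augmentation
    augmented = []
    for speaker, turn in dialogue:
        if speaker == "user" and rng.random() < rate:
            # Insert an OOD turn plus a fallback response before the real turn.
            augmented.append(("user", rng.choice(pool)))
            augmented.append(("system", "Sorry, I can't help with that."))
        augmented.append((speaker, turn))
    return augmented

for speaker, turn in inject_ood(in_domain, ood_pool):
    print(f"{speaker}: {turn}")
```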
1 code implementation • WS 2018 • Igor Shalyminov, Ondřej Dušek, Oliver Lemon
Using a dataset of real conversations collected in the 2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good' system responses to user utterances, i.e., responses that are likely to lead to long and engaging conversations.
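A neural response ranker scores each candidate response against the conversation context and picks the highest-scoring one; below is a toy dual-encoder version of that general idea (a minimal sketch with made-up vocabulary size, dimensions, and random token ids, not the ranker trained on the Alexa Prize data):

```python
import torch
import torch.nn as nn

# Toy dual-encoder response ranker (illustrative sketch only).

class Ranker(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # bag-of-words encoder

    def score(self, context_ids, response_ids):
        ctx = self.emb(context_ids)                  # encode the conversation context
        rsp = self.emb(response_ids)                 # encode a candidate response
        return (ctx * rsp).sum(-1)                   # dot-product relevance score

ranker = Ranker()
context = torch.randint(0, 1000, (1, 8))             # fake token ids for the context
candidates = [torch.randint(0, 1000, (1, 8)) for _ in range(3)]
scores = [ranker.score(context, c).item() for c in candidates]
best = max(range(len(candidates)), key=lambda i: scores[i])
print(f"selected candidate {best} with score {scores[best]:.3f}")
```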
no code implementations • 8 Oct 2018 • Igor Shalyminov, Arash Eshghi, Oliver Lemon
To test the model's generalisation potential, we evaluate the same model on the bAbI+ dataset, without any additional training.
no code implementations • 20 Dec 2017 • Ioannis Papaioannou, Amanda Cercas Curry, Jose L. Part, Igor Shalyminov, Xinnuo Xu, Yanchao Yu, Ondřej Dušek, Verena Rieser, Oliver Lemon
Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence.
1 code implementation • 22 Sep 2017 • Igor Shalyminov, Arash Eshghi, Oliver Lemon
Results show that the semantic accuracy of the MemN2N model drops drastically, and that although it is in principle able to learn to process the constructions in bAbI+, it needs an impractical amount of training data to do so.
no code implementations • EMNLP 2017 • Arash Eshghi, Igor Shalyminov, Oliver Lemon
Our experiments show that our model can process 74% of the Facebook AI bAbI dataset even when trained on only 0.13% of the data (5 dialogues).