no code implementations • EMNLP (NLP4ConvAI) 2021 • Pei Zhou, Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
We further investigate whether such models can identify when to generate implicit background knowledge and when it is not necessary.
no code implementations • INLG (ACL) 2020 • Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, Dilek Hakkani-Tur
Open-domain dialog systems aim to generate relevant, informative and engaging responses.
1 code implementation • 20 May 2023 • Chao Zhao, Spandana Gella, Seokhwan Kim, Di Jin, Devamanyu Hazarika, Alexandros Papangelis, Behnam Hedayatnia, Mahdi Namazifar, Yang Liu, Dilek Hakkani-Tur
We hope this task and dataset can promote further research on TOD and subjective content understanding.
no code implementations • 10 Feb 2023 • Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Sungjin Lee, Devamanyu Hazarika, Mahdi Namazifar, Di Jin, Yang Liu, Dilek Hakkani-Tur
This work focuses on in-context data augmentation for intent detection; a brief illustrative sketch of the general idea follows this entry.
Ranked #1 on Intent Detection on HWU64 5-shot
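As a rough illustration of in-context data augmentation (not the approach or prompts used in this paper), a handful of seed utterances for an intent can be formatted into a few-shot prompt and a large language model asked to produce additional training utterances. The `complete` callable below is a hypothetical stand-in for whatever LLM endpoint is available.

```python
# Illustrative sketch, not the paper's method: build a few-shot prompt from
# seed utterances so an LLM can generate extra training examples for an intent.

def build_augmentation_prompt(intent: str, seed_utterances: list[str], n_new: int = 5) -> str:
    """Format seed examples into a prompt asking for new utterances with the same intent."""
    examples = "\n".join(f"- {u}" for u in seed_utterances)
    return (
        f"The following are user utterances with the intent '{intent}':\n"
        f"{examples}\n"
        f"Write {n_new} new, diverse utterances with the same intent:\n- "
    )

def augment_intent(intent: str, seed_utterances: list[str], complete) -> list[str]:
    """`complete` is any callable mapping a prompt string to generated text (hypothetical)."""
    generated = complete(build_augmentation_prompt(intent, seed_utterances))
    # Each generated line becomes a candidate training example for the intent.
    return [line.strip("- ").strip() for line in generated.splitlines() if line.strip()]
```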
1 code implementation • 7 Feb 2023 • Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Seokhwan Kim, Andy Rosenbaum, Yang Liu, Zhou Yu, Dilek Hakkani-Tur
Collecting high-quality conversational data can be very expensive for most applications and infeasible for others due to privacy, ethical, or similar concerns.
no code implementations • 25 Oct 2022 • Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Andy Rosenbaum, Seokhwan Kim, Yang Liu, Zhou Yu, Dilek Hakkani-Tur
Dialogue understanding tasks often require abundant annotated data to achieve good performance, which presents challenges in low-resource settings.
no code implementations • SIGDIAL (ACL) 2022 • Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Dilek Hakkani-Tur
Specifically, we show that for open-domain conversations with 10% of seed data, our approach performs close to the baseline that uses 100% of the data, while for knowledge-grounded conversations, it achieves the same using only 1% of the data, on human ratings of engagingness, fluency, and relevance.
no code implementations • 22 Mar 2022 • Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
In many real-world settings, machine learning models need to identify user inputs that are out-of-domain (OOD) so as to avoid performing wrong actions.
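A common illustrative baseline for OOD detection (not necessarily the method studied in this paper) is to threshold the classifier's maximum softmax probability: low top-class confidence is treated as a signal that the input falls outside the supported domains.

```python
import numpy as np

# Hedged sketch of a maximum-softmax-probability baseline for OOD detection.

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def is_out_of_domain(logits: np.ndarray, threshold: float = 0.7) -> bool:
    """Return True when the top-class confidence is below `threshold` (threshold is illustrative)."""
    return float(softmax(logits).max()) < threshold

# Example: near-flat logits over 4 in-domain intents yield low confidence, so the input is flagged OOD.
print(is_out_of_domain(np.array([0.1, 0.2, 0.1, 0.15])))  # True
```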
no code implementations • ACL 2022 • Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
Implicit knowledge, such as common sense, is key to fluid human conversations.
no code implementations • 15 Oct 2021 • Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Dilek Hakkani-Tur
Rich, open-domain textual data available on the web has resulted in great advancements for language processing.
1 code implementation • 11 Oct 2021 • Sashank Santhanam, Behnam Hedayatnia, Spandana Gella, Aishwarya Padmakumar, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
We demonstrate the benefit of our Conv-FEVER dataset by showing that models trained on it perform reasonably well at detecting responses that are factually inconsistent with the provided knowledge, as evaluated on our human-annotated data.
1 code implementation • 28 Sep 2021 • Seokhwan Kim, Yang Liu, Di Jin, Alexandros Papangelis, Karthik Gopalakrishnan, Behnam Hedayatnia, Dilek Hakkani-Tur
Most prior work in dialogue modeling has focused on written conversations, largely because of the availability of existing data sets.
1 code implementation • EMNLP (NLP4ConvAI) 2021 • Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
Most prior work on task-oriented dialogue systems is restricted to supporting domain APIs.
1 code implementation • SIGDIAL (ACL) 2021 • Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
Moreover, existing dialogue datasets do not explicitly focus on exhibiting commonsense as a facet.
no code implementations • ACL (dialdoc) 2021 • Di Jin, Seokhwan Kim, Dilek Hakkani-Tur
Most prior work on task-oriented dialogue systems is restricted to limited coverage of domain APIs.
no code implementations • SIGDIAL (ACL) 2021 • Alexandros Papangelis, Karthik Gopalakrishnan, Aishwarya Padmakumar, Seokhwan Kim, Gokhan Tur, Dilek Hakkani-Tur
We show an average improvement of 35% in intent detection and 21% in slot tagging over a baseline model trained from the seed data.
1 code implementation • 22 Jan 2021 • Seokhwan Kim, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tur
This challenge track aims to expand the coverage of task-oriented dialogue systems by incorporating external unstructured knowledge sources.
no code implementations • 12 Nov 2020 • Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David Traum, Maxine Eskenazi, Ahmad Beirami, Eunjoon Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, Rajen Subba
The challenge spans four tracks, including interactive evaluation of dialog.
no code implementations • 2 Aug 2020 • Wentian Zhao, Seokhwan Kim, Ning Xu, Hailin Jin
This paper presents a new video question answering task on screencast tutorials.
2 code implementations • SIGDIAL (ACL) 2020 • Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, Dilek Hakkani-Tur
In this paper, we propose to expand coverage of task-oriented dialogue systems by incorporating external unstructured knowledge sources.
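This task setting is often decomposed into knowledge-seeking turn detection, knowledge selection, and knowledge-grounded response generation. The sketch below wires those stages together with hypothetical model callables; it is an assumption-laden illustration, not the authors' released pipeline.

```python
from typing import Callable, List, Optional

# Hedged sketch of a knowledge-grounded task-oriented dialogue pipeline.
# The three callables are hypothetical stand-ins for trained components.

def respond(
    dialogue_context: List[str],
    knowledge_snippets: List[str],
    detect_knowledge_turn: Callable[[List[str]], bool],
    select_knowledge: Callable[[List[str], List[str]], str],
    generate: Callable[[List[str], Optional[str]], str],
) -> str:
    """Route a turn through API-based handling or unstructured knowledge access."""
    if not detect_knowledge_turn(dialogue_context):
        # The user request is covered by the existing domain APIs.
        return generate(dialogue_context, None)
    # Otherwise, ground the response in the most relevant unstructured snippet.
    snippet = select_knowledge(dialogue_context, knowledge_snippets)
    return generate(dialogue_context, snippet)
```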
no code implementations • 26 May 2020 • Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, Dilek Hakkani-Tur
In this paper, we propose using a dialogue policy to plan the content and style of target responses in the form of an action plan, which includes knowledge sentences related to the dialogue context, targeted dialogue acts, topic information, etc.
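A minimal sketch of what such an action plan might look like as a data structure, assuming illustrative field names and a made-up serialization format rather than the paper's exact schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionPlan:
    """Illustrative action plan: dialogue acts, topic, and grounding knowledge."""
    dialogue_acts: List[str]                                  # e.g. ["statement", "question"]
    topic: str                                                # e.g. "music"
    knowledge_sentences: List[str] = field(default_factory=list)

    def to_control_string(self) -> str:
        """Serialize the plan so it can be prepended to a response generator's input."""
        acts = " ".join(f"<da:{a}>" for a in self.dialogue_acts)
        knowledge = " ".join(self.knowledge_sentences)
        return f"{acts} <topic:{self.topic}> <knowledge> {knowledge}"

# Usage: condition a generator on the serialized plan plus the dialogue context.
plan = ActionPlan(["statement", "question"], "music", ["The Beatles formed in Liverpool in 1960."])
generator_input = plan.to_control_string() + " <context> ..."
```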
2 code implementations • LREC 2020 • Anthony Colas, Seokhwan Kim, Franck Dernoncourt, Siddhesh Gupte, Daisy Zhe Wang, Doo Soon Kim
The results indicate that the task is challenging and call for the investigation of new algorithms.
no code implementations • 2 Dec 2019 • Ta-Chung Chi, Mihail Eric, Seokhwan Kim, Minmin Shen, Dilek Hakkani-Tur
We demonstrate the proposed strategy is substantially more realistic and data-efficient compared to previously proposed pre-exploration techniques.
no code implementations • 14 Nov 2019 • Seokhwan Kim, Michel Galley, Chulaka Gunasekara, Sungjin Lee, Adam Atkinson, Baolin Peng, Hannes Schulz, Jianfeng Gao, Jinchao Li, Mahmoud Adada, Minlie Huang, Luis Lastras, Jonathan K. Kummerfeld, Walter S. Lasecki, Chiori Hori, Anoop Cherian, Tim K. Marks, Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta
This paper introduces the Eighth Dialog System Technology Challenge.
no code implementations • WS 2019 • Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu
While recent work in abstractive summarization has resulted in higher scores on automatic metrics, there is little understanding of how these systems combine information taken from multiple document sentences.
1 code implementation • ACL 2019 • Amirreza Shirani, Franck Dernoncourt, Paul Asente, Nedim Lipka, Seokhwan Kim, Jose Echevarria, Thamar Solorio
In visual communication, text emphasis is used to increase the comprehension of written text and to convey the author's intent.
3 code implementations • ACL 2019 • Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu
There is thus a crucial gap between sentence selection and fusion to support summarizing by both compressing single sentences and fusing pairs.
2 code implementations • 27 Jul 2018 • Seonghan Ryu, Seokhwan Kim, Junhwi Choi, Hwanjo Yu, Gary Geunbae Lee
We then used domain-category analysis as an auxiliary task to train neural sentence embeddings for OOD sentence detection.
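A hedged sketch of the general recipe (a shared sentence encoder trained with an auxiliary domain-category classification head), with illustrative sizes and a deliberately simple encoder rather than the authors' architecture:

```python
import torch
import torch.nn as nn

class SentenceEmbedderWithAuxTask(nn.Module):
    """Sentence encoder shaped by an auxiliary in-domain category classifier (illustrative)."""

    def __init__(self, vocab_size: int, embed_dim: int = 128, num_domains: int = 10):
        super().__init__()
        self.token_embedding = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pools token embeddings
        self.projection = nn.Linear(embed_dim, embed_dim)
        # Auxiliary head: predict the in-domain category of each sentence.
        self.domain_classifier = nn.Linear(embed_dim, num_domains)

    def embed(self, token_ids: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.projection(self.token_embedding(token_ids)))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Training uses the auxiliary domain logits; at test time the learned embedding
        # space can be scored for OOD-ness (e.g. distance to in-domain centroids).
        return self.domain_classifier(self.embed(token_ids))

model = SentenceEmbedderWithAuxTask(vocab_size=5000)
loss = nn.CrossEntropyLoss()(model(torch.randint(0, 5000, (4, 12))), torch.tensor([0, 3, 1, 2]))
```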
2 code implementations • NAACL 2018 • Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian
Neural abstractive summarization models have led to promising results in summarizing relatively short documents.
Ranked #4 on Unsupervised Extractive Summarization on Pubmed
1 code implementation • 17 Jun 2017 • Zhe Wang, Kingsley Kuan, Mathieu Ravaut, Gaurav Manek, Sibo Song, Yuan Fang, Seokhwan Kim, Nancy Chen, Luis Fernando D'Haro, Luu Anh Tuan, Hongyuan Zhu, Zeng Zeng, Ngai Man Cheung, Georgios Piliouras, Jie Lin, Vijay Chandrasekhar
Beyond that, we extend the original competition by including text information in the classification, making this a truly multi-modal approach with vision, audio and text.
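As a rough sketch of one way to combine the three modalities (simple late fusion of precomputed vision, audio, and text features), with dimensions and architecture that are illustrative assumptions rather than the competition entry itself:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Illustrative late-fusion classifier over vision, audio, and text features."""

    def __init__(self, vision_dim=2048, audio_dim=128, text_dim=300, num_classes=1000):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(vision_dim + audio_dim + text_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_classes),
        )

    def forward(self, vision, audio, text):
        # Concatenate per-modality feature vectors, then classify jointly.
        return self.fusion(torch.cat([vision, audio, text], dim=-1))

model = LateFusionClassifier()
logits = model(torch.randn(2, 2048), torch.randn(2, 128), torch.randn(2, 300))
```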