1 code implementation • 17 Feb 2023 • Yan Xu, Mahdi Namazifar, Devamanyu Hazarika, Aishwarya Padmakumar, Yang Liu, Dilek Hakkani-Tür
Large pre-trained language models (PLMs) have been shown to retain implicit knowledge within their parameters.
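As a quick illustration of this claim (not taken from the paper itself), the snippet below probes a public masked language model for a simple fact it was never explicitly given; the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint are assumed to be available.

```python
# Minimal sketch (not the paper's method): probing the implicit knowledge a PLM
# stores in its parameters via masked-token prediction.
# Assumes the Hugging Face `transformers` package and the public
# `bert-base-uncased` checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model has no explicit knowledge base, yet it can often recover simple
# facts like this one from pre-training alone.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```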
no code implementations • 2 Feb 2023 • Nicholas Meade, Spandana Gella, Devamanyu Hazarika, Prakhar Gupta, Di Jin, Siva Reddy, Yang Liu, Dilek Hakkani-Tür
For instance, using automatic evaluation, we find that our best fine-tuned baseline generates safe responses to unsafe dialogue contexts from DiaSafety only 4.04% more often than our approach.
no code implementations • CL (ACL) 2022 • Manaal Faruqui, Dilek Hakkani-Tür
As more users across the world are interacting with dialog agents in their daily life, there is a need for better speech understanding that calls for renewed attention to the dynamics between research in automatic speech recognition (ASR) and natural language understanding (NLU).
Automatic Speech Recognition
no code implementations • 11 Jun 2021 • Devamanyu Hazarika, Mahdi Namazifar, Dilek Hakkani-Tür
In this work, we propose novel approaches for controlling encoder-decoder transformer-based NLG models in a zero-shot setting.
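The paper's own control techniques are not reproduced here; the sketch below only illustrates the zero-shot control setting it targets, steering a pre-trained encoder-decoder (BART) at inference time with a textual control prefix and no fine-tuning. The model name, document, and prefix are illustrative assumptions.

```python
# Illustrative sketch of zero-shot control of an encoder-decoder NLG model
# (not the paper's technique): a pre-trained BART summarizer is steered purely
# through a textual control prefix at inference time, with no fine-tuning.
# Assumes `transformers` and the public `facebook/bart-large-cnn` checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

document = "The city council approved the new transit plan after a long debate over funding."
control_prefix = "Focus on the budget implications. "  # hypothetical control signal

inputs = tokenizer(control_prefix + document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```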
1 code implementation • Findings (ACL) 2022 • Ayush Shrivastava, Karthik Gopalakrishnan, Yang Liu, Robinson Piramuthu, Gokhan Tür, Devi Parikh, Dilek Hakkani-Tür
Interactive robots navigating photo-realistic environments need to be trained to effectively leverage and handle the dynamic nature of dialogue in addition to the challenges underlying vision-and-language navigation (VLN).
no code implementations • 12 Nov 2020 • Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David Traum, Maxine Eskenazi, Ahmad Beirami, Eunjoon Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, Rajen Subba
The challenge covers four tracks, one of which is interactive evaluation of dialog.
no code implementations • 5 Nov 2020 • Mahdi Namazifar, Alexandros Papangelis, Gokhan Tur, Dilek Hakkani-Tür
Different flavors of transfer learning have shown tremendous impact in advancing research and applications of machine learning.
no code implementations • WS 2019 • Guan-Lin Chao, Abhinav Rastogi, Semih Yavuz, Dilek Hakkani-Tür, Jindong Chen, Ian Lane
Understanding and conversing about dynamic scenes is one of the key capabilities of AI agents that navigate the environment and convey useful information to humans.
no code implementations • 5 Jul 2019 • Shachi Paul, Rahul Goel, Dilek Hakkani-Tür
In unsupervised learning experiments we achieve an F1 score of 54.1% on system turns in human-human dialogues.
no code implementations • 1 Jul 2019 • Rahul Goel, Shachi Paul, Dilek Hakkani-Tür
In this work, we analyze the performance of these two alternative dialogue state tracking methods, and present a hybrid approach (HyST) which learns the appropriate method for each slot type.
Ranked #18 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.0
Dialogue State Tracking
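A minimal sketch, not the authors' code, of the hybrid idea behind the HyST entry above: for each slot, keep whichever of two trackers (a fixed-vocabulary classifier-style tracker or an open-vocabulary candidate-based tracker) scores higher on development data, and use that tracker at test time. All slots, trackers, and examples below are hypothetical.

```python
# Hypothetical per-slot tracker selection in the spirit of a hybrid DST approach.
from typing import Callable, Dict, List, Tuple

Turn = str
SlotValue = str
Tracker = Callable[[Turn], SlotValue]

def select_per_slot_trackers(
    slots: List[str],
    fixed_vocab: Dict[str, Tracker],
    open_vocab: Dict[str, Tracker],
    dev_data: Dict[str, List[Tuple[Turn, SlotValue]]],
) -> Dict[str, Tracker]:
    """For each slot, pick the tracker with the higher dev-set accuracy."""
    def accuracy(tracker: Tracker, examples: List[Tuple[Turn, SlotValue]]) -> float:
        correct = sum(tracker(turn) == gold for turn, gold in examples)
        return correct / max(len(examples), 1)

    chosen = {}
    for slot in slots:
        acc_fixed = accuracy(fixed_vocab[slot], dev_data[slot])
        acc_open = accuracy(open_vocab[slot], dev_data[slot])
        chosen[slot] = fixed_vocab[slot] if acc_fixed >= acc_open else open_vocab[slot]
    return chosen

# Toy usage: 'hotel-price' behaves like a closed-class slot, 'hotel-name' like an open one.
fixed = {"hotel-price": lambda t: "cheap" if "cheap" in t else "expensive",
         "hotel-name": lambda t: "unknown"}
opened = {"hotel-price": lambda t: "unknown",
          "hotel-name": lambda t: t.split("called ")[-1].rstrip(".") if "called " in t else "unknown"}
dev = {"hotel-price": [("I want a cheap room.", "cheap")],
       "hotel-name": [("Book the hotel called Alexander.", "Alexander")]}

trackers = select_per_slot_trackers(["hotel-price", "hotel-name"], fixed, opened, dev)
print(trackers["hotel-name"]("Book the hotel called Alexander."))  # -> Alexander
```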
3 code implementations • 15 Jan 2018 • Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, Larry Heck
We propose Machines Talking To Machines (M2M), a framework combining automation and crowdsourcing to rapidly bootstrap end-to-end dialogue agents for goal-oriented dialogues in arbitrary domains.
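The sketch below is a simplified illustration of an M2M-style pipeline rather than the authors' implementation: a rule-based user simulator and system agent self-play over a small task schema to produce annotated dialogue outlines, which crowd workers would then paraphrase into natural language. The schema, dialogue acts, and slot values are hypothetical.

```python
# Illustrative M2M-style outline generation via self-play (not the authors' code).
import random

schema = {"task": "book_movie",
          "slots": {"movie": ["Inside Out", "Dune"], "time": ["6pm", "9pm"]}}

def user_goal(schema):
    """Sample a user goal: one value per slot from the task schema."""
    return {slot: random.choice(values) for slot, values in schema["slots"].items()}

def self_play(schema):
    """Generate a dialogue outline as a list of (speaker, dialogue_act) pairs."""
    goal, outline = user_goal(schema), []
    outline.append(("user", {"act": "inform_intent", "task": schema["task"]}))
    for slot, value in goal.items():          # system requests each slot in turn
        outline.append(("system", {"act": "request", "slot": slot}))
        outline.append(("user", {"act": "inform", "slot": slot, "value": value}))
    outline.append(("system", {"act": "confirm", **goal}))
    outline.append(("user", {"act": "affirm"}))
    return outline

for speaker, act in self_play(schema):
    print(f"{speaker:>6}: {act}")  # outlines like these would later be crowd-paraphrased
```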