no code implementations • 20 Dec 2013 • Yann N. Dauphin, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck
We propose a novel zero-shot learning method for semantic utterance classification (SUC).
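A minimal sketch of the zero-shot idea: classify an utterance by its similarity to the category names themselves, so no labeled examples of the target classes are needed. The paper learns a semantic space (e.g. with deep networks over query click logs); plain TF-IDF stands in here purely for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

labels = ["restaurant reservation", "flight booking", "weather forecast"]
utterance = "what is the weather forecast for tomorrow"

# Fit one vector space over labels + utterance, then pick the label
# whose vector is closest to the utterance vector.
vec = TfidfVectorizer().fit(labels + [utterance])
sims = cosine_similarity(vec.transform([utterance]), vec.transform(labels))
print(labels[int(np.argmax(sims))])  # -> "weather forecast"
```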
no code implementations • 12 Sep 2016 • Yun-Nung Chen, Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Gao, Li Deng
Natural language understanding (NLU) is a core component of a spoken dialogue system.
1 code implementation • WS 2017 • Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck
We compare the performance of our proposed architecture with two context models, one that uses just the previous turn context and another that encodes dialogue context in a memory network, but loses the order of utterances in the dialogue history.
Goal-Oriented Dialogue Systems, Spoken Language Understanding
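A minimal sketch of the memory-network context model mentioned in the entry above, assuming utterance embeddings are precomputed (names and shapes are illustrative):

```python
import numpy as np

def memory_context(history_embs, query_emb):
    # history_embs: (T, d) embeddings of past utterances (unordered memory)
    # query_emb:    (d,)   embedding of the current utterance
    scores = history_embs @ query_emb
    weights = np.exp(scores - scores.max())   # softmax attention weights
    weights /= weights.sum()
    summary = weights @ history_embs          # weighted memory summary
    return np.concatenate([query_emb, summary])
```

Because the softmax weighting is permutation-invariant, shuffling the history leaves the summary unchanged, which is exactly the "loses the order of utterances" property noted in the abstract.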
1 code implementation • 7 Jul 2017 • Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck
While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems.
no code implementations • 29 Nov 2017 • Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck
We show that deep RL based optimization leads to a significant improvement in task success rate and a reduction in dialogue length compared to the supervised training model.
1 code implementation • NAACL 2018 • Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck
To address this challenge, we propose a hybrid imitation and reinforcement learning method, with which a dialogue agent can effectively learn from its interaction with users by learning from human teaching and feedback.
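A hedged sketch of how such a hybrid scheme can be wired up: a supervised imitation step on human-demonstrated actions, then REINFORCE updates on user feedback applied to the same policy. The network size and data layout are illustrative stand-ins, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def imitation_step(states, teacher_actions):
    # states: (B, 16) dialogue-state features; teacher_actions: (B,)
    loss = F.cross_entropy(policy(states), teacher_actions)
    opt.zero_grad(); loss.backward(); opt.step()

def reinforce_step(states, actions, episode_reward):
    # REINFORCE: scale log-probs of taken actions by the episode reward
    logp = torch.distributions.Categorical(logits=policy(states)).log_prob(actions)
    loss = -(episode_reward * logp).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```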
no code implementations • 11 Nov 2018 • Izzeddin Gur, Dilek Hakkani-Tur, Gokhan Tur, Pararth Shah
We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal.
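One plausible reading of that objective, sketched below: a KL term for the latent variable that injects response diversity, plus a goal-regularization penalty. The exact form of the penalty (cosine distance between response and goal embeddings) is this sketch's assumption, not the paper's stated formula.

```python
import torch
import torch.nn.functional as F

def simulator_loss(nll, z_mu, z_logvar, resp_emb, goal_emb, beta=1.0, lam=0.1):
    # nll: reconstruction loss of the generated user response
    # KL term for the latent variable that promotes response diversity
    kl = -0.5 * torch.sum(1 + z_logvar - z_mu.pow(2) - z_logvar.exp())
    # hypothetical goal regularizer: penalize responses whose embedding
    # drifts away from the initial user goal embedding
    goal_reg = 1 - F.cosine_similarity(resp_emb, goal_emb, dim=-1).mean()
    return nll + beta * kl + lam * goal_reg
```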
4 code implementations • WS 2019 • Alexandros Papangelis, Yi-Chia Wang, Piero Molino, Gokhan Tur
and their own objectives, and can only interact via the natural language they generate.
no code implementations • 18 Jul 2019 • Yue Weng, Huaixiu Zheng, Franziska Bell, Gokhan Tur
Our system consists of two major components: intent detection and reply retrieval, which are very different from standard smart reply systems where the task is to directly predict a reply.
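A hedged sketch of that two-component pipeline: detect the intent first, then retrieve a reply from that intent's candidate pool. Intent names, replies, and the toy classifier are illustrative stand-ins.

```python
CANDIDATES = {
    "eta_request": ["I'm 5 minutes away.", "Almost there!"],
    "pickup_confirm": ["See you at the pickup spot."],
}

def detect_intent(utterance: str) -> str:
    # stand-in for a trained intent classifier
    return "eta_request" if "where" in utterance.lower() else "pickup_confirm"

def smart_reply(utterance: str) -> str:
    pool = CANDIDATES[detect_intent(utterance)]
    return pool[0]  # a real retriever would rank the candidate pool

print(smart_reply("Where are you?"))
```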
1 code implementation • WS 2019 • Lei Shu, Piero Molino, Mahdi Namazifar, Hu Xu, Bing Liu, Huaixiu Zheng, Gokhan Tur
It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot.
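A minimal sketch of the copy-augmented decoding step: the decoder's generation distribution is mixed with a pointer-style copy distribution over source tokens via a gate `p_gen`. Shapes are for a single decoding step, and the per-slot multi-label decoder is omitted.

```python
import torch

def copy_augmented_dist(gen_logits, attn_weights, src_token_ids, p_gen, vocab_size):
    # gen_logits:    (vocab_size,) decoder output scores
    # attn_weights:  (src_len,)    attention over source tokens, sums to 1
    # src_token_ids: (src_len,)    LongTensor of source token ids
    # p_gen:         scalar gate in [0, 1] between generating and copying
    gen_dist = p_gen * torch.softmax(gen_logits, dim=-1)
    copy_dist = torch.zeros(vocab_size).scatter_add(
        0, src_token_ids, (1.0 - p_gen) * attn_weights)
    return gen_dist + copy_dist
```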
4 code implementations • 17 Jan 2020 • Alexandros Papangelis, Mahdi Namazifar, Chandra Khatri, Yi-Chia Wang, Piero Molino, Gokhan Tur
Plato has been designed to be easy to understand and debug, and it is agnostic to the underlying learning frameworks that train each component.
no code implementations • 24 Jan 2020 • Andrea Madotto, Mahdi Namazifar, Joost Huizinga, Piero Molino, Adrien Ecoffet, Huaixiu Zheng, Alexandros Papangelis, Dian Yu, Chandra Khatri, Gokhan Tur
In this work, we propose to use the exploration approach of Go-Explore for solving text-based games.
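A toy sketch of Go-Explore's archive-and-return loop as it might apply to a text game: keep an archive of reached "cells", return to one by replaying its trajectory, then explore from there. The `env` object and its methods (restore/actions/step/cell) are hypothetical stand-ins, not the paper's interface.

```python
import random

archive = {}  # cell -> (trajectory, cumulative_reward)

def go_explore_iteration(env, n_steps=50):
    # return to a previously reached cell by replaying its trajectory
    cell = random.choice(list(archive) or [None])
    traj, score = archive.get(cell, ([], 0.0))
    state = env.restore(traj)  # assumes a deterministic, resettable env
    # then explore from there with random actions
    for _ in range(n_steps):
        action = random.choice(env.actions(state))
        state, reward = env.step(state, action)
        traj, score = traj + [action], score + reward
        c = env.cell(state)
        if c not in archive or score > archive[c][1]:
            archive[c] = (traj, score)  # keep the best way to reach c
```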
no code implementations • 28 Jan 2020 • Yue Weng, Sai Sumanth Miryala, Chandra Khatri, Runze Wang, Huaixiu Zheng, Piero Molino, Mahdi Namazifar, Alexandros Papangelis, Hugh Williams, Franziska Bell, Gokhan Tur
As a baseline approach, we trained task-specific Statistical Language Models (SLMs) and fine-tuned a state-of-the-art Generative Pre-Training (GPT) language model to re-rank the n-best ASR hypotheses, followed by a model to identify the dialog act and slots.
Automatic Speech Recognition (ASR) +3
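A hedged sketch of the LM re-ranking step alone: score each ASR hypothesis by its GPT-2 perplexity and keep the best. The paper fine-tunes the model and trains it jointly with the understanding tasks; the off-the-shelf checkpoint here is only for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def score(hyp: str) -> float:
    ids = tok(hyp, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean per-token negative log-likelihood
    return loss.item()

nbest = ["i want to book a right", "i want to book a ride"]
print(min(nbest, key=score))  # lowest NLL wins the re-ranking
```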
no code implementations • 16 Feb 2020 • Patrick von Platen, Fei Tao, Gokhan Tur
We show that the SNN outperforms the baseline by a relative 26.8% in Equal Error Rate (EER).
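For reference, a minimal EER computation from verification scores, using the standard ROC-based definition (the operating point where the false-accept and false-reject rates cross). Labels and scores below are toy data.

```python
import numpy as np
from sklearn.metrics import roc_curve

labels = np.array([1, 1, 0, 0, 1, 0])       # 1 = same-speaker trial
scores = np.array([0.9, 0.7, 0.4, 0.3, 0.6, 0.8])

far, tpr, _ = roc_curve(labels, scores)     # false-accept rate, true-positive rate
frr = 1 - tpr                               # false-reject rate
eer = far[np.nanargmin(np.abs(far - frr))]  # point where FAR == FRR
print(f"EER ~ {eer:.3f}")
```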
no code implementations • 20 Mar 2020 • Fei Tao, Gokhan Tur
Speaker verification is an established yet challenging task in speech processing and a very vibrant research area.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Lei Shu, Alexandros Papangelis, Yi-Chia Wang, Gokhan Tur, Hu Xu, Zhaleh Feizollahi, Bing Liu, Piero Molino
This work introduces Focused-Variation Network (FVN), a novel model to control language generation.
no code implementations • 3 Nov 2020 • Mahdi Namazifar, Gokhan Tur, Dilek Hakkani Tür
The insertion and drop modifications of the input text during training of warped language models (WLMs) resemble the types of noise caused by Automatic Speech Recognition (ASR) errors; as a result, WLMs are likely to be more robust to ASR noise.
Automatic Speech Recognition (ASR) +4
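A minimal sketch of the noising side of such training, assuming token-level insertion and drop operations; a WLM is then trained to undo these modifications (probabilities and the filler token are illustrative):

```python
import random

def warp(tokens, p_drop=0.1, p_insert=0.1, filler="[MASK]"):
    # Randomly insert filler tokens and drop real ones, mimicking
    # ASR-style insertion and deletion errors.
    out = []
    for tok in tokens:
        if random.random() < p_insert:
            out.append(filler)   # spurious insertion
        if random.random() >= p_drop:
            out.append(tok)      # keep the token unless dropped
    return out
```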
no code implementations • 5 Nov 2020 • Mahdi Namazifar, Alexandros Papangelis, Gokhan Tur, Dilek Hakkani-Tür
Different flavors of transfer learning have shown tremendous impact in advancing research and applications of machine learning.
2 code implementations • 21 Nov 2020 • Weixin Liang, Feiyang Niu, Aishwarya Reganti, Govind Thattai, Gokhan Tur
We show that LRTA makes a step towards truly understanding the question while the state-of-the-art model tends to learn superficial correlations from the training data.
no code implementations • 2 Dec 2020 • Qing Ping, Feiyang Niu, Govind Thattai, Joel Chengottusseriyil, Qiaozi Gao, Aishwarya Reganti, Prashanth Rajagopal, Gokhan Tur, Dilek Hakkani-Tur, Prem Nataraja
Current conversational AI systems aim to understand a set of pre-designed requests and execute related actions, which limits their ability to evolve naturally and adapt based on human interactions.
no code implementations • 29 Dec 2020 • Yi-Chia Wang, Alexandros Papangelis, Runze Wang, Zhaleh Feizollahi, Gokhan Tur, Robert Kraut
The second component of the research is the construction of a conversational agent model capable of injecting social language into an agent's responses while still preserving content.
no code implementations • 9 Jan 2021 • Shane Storks, Qiaozi Gao, Govind Thattai, Gokhan Tur
Embodied instruction following is a challenging problem requiring an agent to infer a sequence of primitive actions to achieve a goal environment state from complex language and visual inputs.
no code implementations • 26 Mar 2021 • Mahdi Namazifar, John Malik, Li Erran Li, Gokhan Tur, Dilek Hakkani Tür
Masked language models have revolutionized natural language processing systems in the past few years.
1 code implementation • CVPR 2021 • Tao Tu, Qing Ping, Govind Thattai, Gokhan Tur, Prem Natarajan
Most existing work on the Guesser encodes the dialog history as a whole and trains Guesser models from scratch on the GuessWhat?! dataset.
no code implementations • SIGDIAL (ACL) 2021 • Alexandros Papangelis, Karthik Gopalakrishnan, Aishwarya Padmakumar, Seokhwan Kim, Gokhan Tur, Dilek Hakkani-Tur
We show an average improvement of 35% in intent detection and 21% in slot tagging over a baseline model trained from the seed data.
3 code implementations • 1 Oct 2021 • Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, Dilek Hakkani-Tur
Robots operating in human spaces must be able to engage in natural language interaction with people, both understanding and executing instructions, and using conversation to resolve ambiguity and recover from mistakes.
5 code implementations • 18 Apr 2022 • Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, Prem Natarajan
We present the MASSIVE dataset: Multilingual Amazon SLU resource package (SLURP) for Slot-filling, Intent classification, and Virtual assistant Evaluation.
Ranked #1 on Slot Filling on MASSIVE
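One way to get started with the dataset, assuming the public Hugging Face release; the dataset id, config name, and field names below are this sketch's assumptions, not guaranteed by the paper.

```python
from datasets import load_dataset

# "AmazonScience/massive" with a locale config such as "en-US" (assumed)
ds = load_dataset("AmazonScience/massive", "en-US", split="train")
print(ds[0]["utt"], "->", ds[0]["intent"])
```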
no code implementations • 15 Jun 2022 • Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakrishnan, Thomas Gueudre, Dilek Hakkani-Tur, Wael Hamza, Jonathan Hueser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, Alessandro Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith Peris, Chandana Satya Prakash, Stephen Rawls, Andy Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Harakere Sridhar, Liz Tan, Fabian Triefenbach, Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, Prem Natarajan
We present results from a large-scale experiment on pretraining encoders with non-embedding parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the Natural Language Understanding (NLU) component of a virtual assistant system.
Cross-Lingual Natural Language Inference, Intent Classification +5
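A minimal sketch of one common distillation recipe for compressing such encoders: a KL divergence between temperature-softened teacher and student distributions. This is a standard formulation, not necessarily the exact loss used in the experiment above.

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # student_logits, teacher_logits: (batch, classes)
    # Soft-label knowledge distillation with temperature T; the T*T
    # factor keeps gradient magnitudes comparable across temperatures.
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```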
1 code implementation • 2 Aug 2022 • Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Prem Natarajan
In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks.
Ranked #14 on Natural Language Inference on CommitmentBank
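A toy sketch of the two pretraining task types named above, at the data level: a denoising example masks input tokens and asks the seq2seq model to reconstruct the clean sequence, while a CLM example conditions on a prefix and continues it left-to-right. Masking rate and split point are illustrative.

```python
import random

def denoising_example(tokens, mask="<mask>", p=0.15):
    # noised encoder input; decoder target is the clean sequence
    src = [mask if random.random() < p else t for t in tokens]
    return src, tokens

def clm_example(tokens, split=0.5):
    # causal LM task: condition on a prefix, generate the continuation
    k = max(1, int(len(tokens) * split))
    return tokens[:k], tokens[k:]
```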