no code implementations • ACL 2022 • Shachi H Kumar, Hsuan Su, Ramesh Manuvinakurike, Maximilian C. Pinaroc, Sai Prasad, Saurav Sahay, Lama Nachman
Intelligent conversational assistants have become an integral part of our lives for performing simple tasks.
no code implementations • SLPAT (ACL) 2022 • Shachi H. Kumar, Hsuan Su, Ramesh Manuvinakurike, Max Pinaroc, Sai Prasad, Saurav Sahay, Lama Nachman
Conversational assistants are ubiquitous among the general population; however, these systems have had little impact on people with disabilities or speech and language disorders, for whom basic day-to-day communication and social interaction are a huge struggle.
no code implementations • 11 Nov 2023 • Hsuan Su, Rebecca Qian, Chinnadhurai Sankar, Shahin Shayandeh, Shang-Tse Chen, Hung-Yi Lee, Daniel M. Bikel
In this paper, we propose a diagnosis method to attribute bias to each component of a TOD system.
no code implementations • 17 Oct 2023 • Hsuan Su, Cheng-Chu Cheng, Hua Farn, Shachi H Kumar, Saurav Sahay, Shang-Tse Chen, Hung-Yi Lee
Recently, researchers have made considerable improvements in dialogue systems with the progress of large language models (LLMs) such as ChatGPT and GPT-4.
no code implementations • 18 Sep 2023 • Hsuan Su, Ting-yao Hu, Hema Swetha Koppula, Raviteja Vemulapalli, Jen-Hao Rick Chang, Karren Yang, Gautam Varma Mantena, Oncel Tuzel
In this paper, we propose a new strategy for adapting ASR models to new target domains without any text or speech from those domains.
no code implementations • 12 Feb 2023 • Hsuan Su, Shachi H Kumar, Sahisnu Mazumder, Wenda Chen, Ramesh Manuvinakurike, Eda Okur, Saurav Sahay, Lama Nachman, Shang-Tse Chen, Hung-Yi Lee
With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems.
no code implementations • 8 Jun 2022 • Hsuan Su, PoHan Chi, Shih-Cheng Huang, Chung Ho Lam, Saurav Sahay, Shang-Tse Chen, Hung-Yi Lee
Much literature has shown that prompt-based learning is an efficient method for making use of large pre-trained language models.
no code implementations • 4 Dec 2021 • Shachi H Kumar, Hsuan Su, Ramesh Manuvinakurike, Saurav Sahay, Lama Nachman
We build models that can suggest relevant cues in the dialogue response context, which are used to control response generation and can speed up communication.
no code implementations • NAACL 2021 • Hsuan Su, Jiun-Hao Jhan, Fan-Yun Sun, Saurav Sahay, Hung-Yi Lee
Our framework includes a guiding chatbot and an interlocutor model that plays the role of humans.