no code implementations • 4 Apr 2024 • Jooyoung Lee, Fan Yang, Thanh Tran, Qian Hu, Emre Barut, Kai-Wei Chang, Chengwei Su
The frozen large LM is then prompted to predict a task output based on the rationale generated by the lightweight LM.
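A minimal sketch of the two-stage pipeline described above, under stated assumptions: `small_lm_generate` and `large_lm_generate` are placeholder functions standing in for whatever generation backends are actually used (the names and prompt wording are illustrative, not from the paper).

```python
# Two-stage pipeline: a lightweight LM writes a rationale, and a frozen large LM
# is prompted with that rationale to produce the task output.

def small_lm_generate(prompt: str) -> str:
    """Placeholder: call a small, fine-tunable LM here."""
    return "The question asks about property X, so compare Y and Z on X."

def large_lm_generate(prompt: str) -> str:
    """Placeholder: call a frozen large LM here (its weights are never updated)."""
    return "Answer: Z"

def rationale_guided_predict(task_input: str) -> str:
    # Stage 1: the lightweight LM produces a rationale for the input.
    rationale = small_lm_generate(f"Explain step by step how to solve:\n{task_input}")
    # Stage 2: the frozen large LM is prompted with input + rationale and predicts the output.
    prompt = (f"Task: {task_input}\n"
              f"Rationale: {rationale}\n"
              f"Based on the rationale, the answer is:")
    return large_lm_generate(prompt)

print(rationale_guided_predict("Which of Y or Z matches property X?"))
```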
no code implementations • NAACL 2021 • Tong Wang, Jiangning Chen, Mohsen Malmir, Shuyan Dong, Xin He, Han Wang, Chengwei Su, Yue Liu, Yang Liu
In dialog systems, the Natural Language Understanding (NLU) component typically makes the interpretation decision (including domain, intent and slots) for an utterance before the mentioned entities are resolved.
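A toy sketch of the conventional pipeline ordering described above, in which the NLU decision (domain, intent, slots) is made before entity resolution; the `Interpretation` structure, the rule-based `nlu` stub, and the tiny catalog are illustrative assumptions, not the paper's components.

```python
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    domain: str
    intent: str
    slots: dict                                   # slot name -> raw text span
    resolved: dict = field(default_factory=dict)  # slot name -> catalog entity id

CATALOG = {"the beatles": "artist_001", "yesterday": "song_042"}

def nlu(utterance: str) -> Interpretation:
    # Placeholder NLU decision, made *before* any entity is resolved.
    return Interpretation(domain="Music", intent="PlaySong",
                          slots={"SongName": "yesterday", "ArtistName": "the beatles"})

def resolve_entities(interp: Interpretation) -> Interpretation:
    # Entity resolution runs after the NLU decision, so it cannot inform it.
    interp.resolved = {k: CATALOG.get(v.lower(), "UNKNOWN") for k, v in interp.slots.items()}
    return interp

print(resolve_entities(nlu("play yesterday by the beatles")))
```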
no code implementations • NAACL 2021 • Tzu-Hsiang Lin, Yipeng Shi, Chentao Ye, Yang Fan, Weitong Ruan, Emre Barut, Wael Hamza, Chengwei Su
In commercial dialogue systems, the Spoken Language Understanding (SLU) component tends to cover numerous domains, so context is needed to help resolve ambiguities.
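A toy sketch of the idea that dialogue context can disambiguate the domain of an otherwise ambiguous utterance; the rule-based classifier and the Music/Video example are purely illustrative, not the paper's model.

```python
def classify_domain(utterance, previous_turn=None):
    # "play yesterday" is ambiguous between Music and Video; the previous turn disambiguates.
    if "yesterday" in utterance:
        if previous_turn and "movie" in previous_turn:
            return "Video"
        return "Music"
    return "Unknown"

print(classify_domain("play yesterday"))                                # -> Music
print(classify_domain("play yesterday", previous_turn="find a movie"))  # -> Video
```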
no code implementations • 15 Dec 2020 • Subendhu Rongali, Beiye Liu, Liwei Cai, Konstantine Arkoudas, Chengwei Su, Wael Hamza
Since our model can process both speech and text input sequences and learn to predict a target sequence, it also allows us to do zero-shot E2E SLU by training on only text-hypothesis data (without any speech) from a new domain.
Ranked #3 on Spoken Language Understanding on Snips-SmartLights
Automatic Speech Recognition (ASR) +4
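A minimal PyTorch sketch of a seq2seq model whose encoder accepts either speech features or text tokens, which is the property that enables zero-shot E2E SLU from text-only (ASR-hypothesis) training data; the architecture, dimensions, and vocabulary size here are illustrative assumptions rather than the paper's model.

```python
import torch
import torch.nn as nn

class MultiModalSeq2Seq(nn.Module):
    def __init__(self, vocab_size=1000, audio_dim=80, d_model=256):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)   # text tokens -> d_model
        self.audio_proj = nn.Linear(audio_dim, d_model)        # e.g. log-mel frames -> d_model
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)               # predicts target semantic tokens

    def encode(self, text_tokens=None, audio_feats=None):
        # Either modality is mapped into the same d_model space and encoded identically.
        x = self.text_embed(text_tokens) if text_tokens is not None else self.audio_proj(audio_feats)
        _, h = self.encoder(x)
        return h

    def forward(self, target_tokens, text_tokens=None, audio_feats=None):
        h = self.encode(text_tokens, audio_feats)
        dec_out, _ = self.decoder(self.text_embed(target_tokens), h)
        return self.out(dec_out)  # logits over the target sequence

model = MultiModalSeq2Seq()
text = torch.randint(0, 1000, (2, 12))   # batch of ASR-hypothesis token ids
audio = torch.randn(2, 50, 80)           # batch of 50-frame log-mel features
tgt = torch.randint(0, 1000, (2, 8))     # target semantic-parse token ids
print(model(tgt, text_tokens=text).shape, model(tgt, audio_feats=audio).shape)
```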
no code implementations • COLING 2020 • Mingda Li, Xinyue Liu, Weitong Ruan, Luca Soldaini, Wael Hamza, Chengwei Su
The comparison shows that our model can recover the transcription by integrating the fragmented information spread across hypotheses and identifying the frequent error patterns of the ASR module, and can even rewrite the query for better understanding, which illustrates how multi-task learning broadcasts knowledge across tasks.
Automatic Speech Recognition (ASR) +6
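For intuition on how fragmented information in an n-best list can be combined, here is a toy ROVER-style word-voting sketch; this is a deliberately simple stand-in for the idea, not the paper's seq2seq model, and the example hypotheses are made up.

```python
from collections import Counter

def fuse_nbest(hypotheses):
    """Majority-vote each aligned word position across equal-length hypotheses."""
    tokenized = [h.split() for h in hypotheses]
    length = max(len(t) for t in tokenized)
    fused = []
    for i in range(length):
        votes = Counter(t[i] for t in tokenized if i < len(t))
        fused.append(votes.most_common(1)[0][0])
    return " ".join(fused)

nbest = [
    "play the beetles song yesterday",  # 1-best, with a recognition error
    "play the beatles song yesterday",
    "pray the beatles song yesterday",
]
print(fuse_nbest(nbest))  # -> "play the beatles song yesterday"
```

Each individual hypothesis contains an error, but the information needed to recover the correct transcription is present somewhere in the list.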
no code implementations • 11 Jan 2020 • Mingda Li, Weitong Ruan, Xinyue Liu, Luca Soldaini, Wael Hamza, Chengwei Su
The NLU module usually uses only the first-best interpretation of a given speech input in downstream tasks such as domain and intent classification.
Automatic Speech Recognition (ASR) +5
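A small illustrative contrast between the conventional 1-best pipeline and a simple n-best alternative that averages intent scores weighted by ASR confidence; the toy classifier, intents, and confidences below are placeholders, not the paper's approach.

```python
def intent_scores(hypothesis):
    """Placeholder intent classifier returning unnormalized scores per intent."""
    return {"PlayMusic": 2.0 if "play" in hypothesis else 0.1,
            "CheckWeather": 2.0 if "rain" in hypothesis else 0.1}

def classify_1best(nbest):
    # Conventional pipeline: only the first-best hypothesis reaches NLU.
    best_hyp, _ = nbest[0]
    scores = intent_scores(best_hyp)
    return max(scores, key=scores.get)

def classify_nbest(nbest):
    # Confidence-weighted average of intent scores over all hypotheses.
    combined = {}
    for hyp, conf in nbest:
        for intent, s in intent_scores(hyp).items():
            combined[intent] = combined.get(intent, 0.0) + conf * s
    return max(combined, key=combined.get)

nbest = [("rain some music", 0.4), ("play some music", 0.35), ("play sam music", 0.25)]
# The misrecognized 1-best picks the wrong intent; combining n-best recovers PlayMusic.
print(classify_1best(nbest), classify_nbest(nbest))  # -> CheckWeather PlayMusic
```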
no code implementations • 25 Sep 2018 • Chengwei Su, Rahul Gupta, Shankar Ananthakrishnan, Spyros Matsoukas
An ideal re-ranker will exhibit the following two properties: (a) it should prefer the most relevant hypothesis for the given input as the top hypothesis, and (b) the interpretation scores corresponding to each hypothesis produced by the re-ranker should be calibrated.
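A minimal sketch of the two properties, assuming a temperature-scaled softmax as the calibration step (the paper does not prescribe this particular method): hypotheses are ordered by score (property a), and the scores are mapped to probabilities whose magnitudes can be calibrated on held-out data (property b). The scoring values and temperature are illustrative.

```python
import math

def rerank(hypotheses, scores, temperature=1.5):
    """Return hypotheses sorted by score, with calibrated probabilities.

    `temperature` would be fit on held-out data so that predicted probabilities
    match empirical accuracy (e.g. roughly 70% of hypotheses scored 0.7 are correct).
    """
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    return sorted(zip(hypotheses, probs), key=lambda x: x[1], reverse=True)

hyps = ["Music:PlaySong{yesterday}", "Video:PlayVideo{yesterday}", "Books:ReadBook{yesterday}"]
raw_scores = [2.3, 1.1, -0.5]   # e.g. logits from a re-ranking model
for hyp, p in rerank(hyps, raw_scores):
    print(f"{p:.2f}  {hyp}")
```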