Search Results for author: Chengwei Su

Found 7 papers, 0 papers with code

Optimizing NLU Reranking Using Entity Resolution Signals in Multi-domain Dialog Systems

no code implementations NAACL 2021 Tong Wang, Jiangning Chen, Mohsen Malmir, Shuyan Dong, Xin He, Han Wang, Chengwei Su, Yue Liu, Yang Liu

In dialog systems, the Natural Language Understanding (NLU) component typically makes the interpretation decision (including domain, intent and slots) for an utterance before the mentioned entities are resolved.

Tasks: Entity Resolution, Intent Classification +2

Contextual Domain Classification with Temporal Representations

no code implementations NAACL 2021 Tzu-Hsiang Lin, Yipeng Shi, Chentao Ye, Yang Fan, Weitong Ruan, Emre Barut, Wael Hamza, Chengwei Su

In commercial dialogue systems, the Spoken Language Understanding (SLU) component tends to have numerous domains, so context is needed to help resolve ambiguities.

Tasks: Classification, Domain Classification +1

Exploring Transfer Learning For End-to-End Spoken Language Understanding

no code implementations15 Dec 2020 Subendhu Rongali, Beiye Liu, Liwei Cai, Konstantine Arkoudas, Chengwei Su, Wael Hamza

Since our model can process both speech and text input sequences and learn to predict a target sequence, it also allows us to do zero-shot E2E SLU by training on only text-hypothesis data (without any speech) from a new domain.

Tasks: Automatic Speech Recognition (ASR) +4

Multi-task Learning of Spoken Language Understanding by Integrating N-Best Hypotheses with Hierarchical Attention

no code implementations COLING 2020 Mingda Li, Xinyue Liu, Weitong Ruan, Luca Soldaini, Wael Hamza, Chengwei Su

The comparison shows that our model can recover the transcription by integrating fragmented information across hypotheses and identifying the frequent error patterns of the ASR module, and can even rewrite the query for better understanding, which reveals the knowledge-broadcasting characteristic of multi-task learning.

Tasks: Automatic Speech Recognition (ASR) +6

Improving Spoken Language Understanding By Exploiting ASR N-best Hypotheses

no code implementations11 Jan 2020 Mingda Li, Weitong Ruan, Xinyue Liu, Luca Soldaini, Wael Hamza, Chengwei Su

The NLU module usually uses only the 1-best interpretation of a given speech input in downstream tasks such as domain and intent classification.
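Since no code accompanies the paper, here is a minimal illustrative sketch of the underlying idea of exploiting N-best ASR hypotheses rather than only the 1-best: even a simple word-level majority vote across hypotheses can recover tokens that the top hypothesis gets wrong. The function name and the toy equal-length alignment are assumptions for illustration; the paper itself integrates hypotheses with learned models, not voting.

```python
from collections import Counter

def vote_transcript(nbest):
    """ROVER-style word-level majority vote over N-best hypotheses.

    Toy assumption: all hypotheses have the same number of words, so
    positions align one-to-one (real systems need alignment or learned
    integration of the hypotheses).
    """
    words = []
    for position in zip(*[h.split() for h in nbest]):
        # Keep the most frequent word at each aligned position.
        words.append(Counter(position).most_common(1)[0][0])
    return " ".join(words)

# Each hypothesis contains a different ASR error; voting recovers the intent.
nbest = [
    "play jazz music",
    "play jack music",
    "pay jazz music",
]
print(vote_transcript(nbest))  # -> play jazz music
```

The 1-best hypothesis here ("play jazz music") happens to be correct, but voting would also recover it if the top hypothesis were one of the corrupted variants, which is the motivation for looking beyond the 1-best.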

Tasks: Automatic Speech Recognition (ASR) +5

A Re-ranker Scheme for Integrating Large Scale NLU models

no code implementations25 Sep 2018 Chengwei Su, Rahul Gupta, Shankar Ananthakrishnan, Spyros Matsoukas

An ideal re-ranker will exhibit the following two properties: (a) it should prefer the most relevant hypothesis for the given input as the top hypothesis and, (b) the interpretation scores corresponding to each hypothesis produced by the re-ranker should be calibrated.
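The two properties above can be sketched in a few lines: a temperature-scaled softmax over raw hypothesis scores yields (a) a ranking with the most relevant hypothesis on top and (b) scores that form a probability distribution whose sharpness can be calibrated by tuning the temperature on held-out data. This is a generic sketch, not the paper's re-ranker; the intent names and scores below are made up for illustration.

```python
import math

def rerank(hypotheses, temperature=1.0):
    """Sort (interpretation, raw_score) pairs by a temperature-scaled
    softmax, returning (interpretation, probability) pairs.

    The probabilities sum to 1; a higher temperature flattens them,
    a lower one sharpens them, which is the usual knob for calibration.
    """
    exps = [math.exp(score / temperature) for _, score in hypotheses]
    total = sum(exps)
    probs = [(interp, e / total) for (interp, _), e in zip(hypotheses, exps)]
    return sorted(probs, key=lambda pair: pair[1], reverse=True)

# Hypothetical NLU hypotheses with raw model scores.
hyps = [
    ("Music.PlayIntent", 2.1),
    ("Video.PlayIntent", 1.3),
    ("Books.ReadIntent", -0.4),
]
ranked = rerank(hyps)
print(ranked[0][0])  # -> Music.PlayIntent
```

Property (a) holds because sorting puts the highest-probability interpretation first; property (b) is approximated because the outputs are normalized probabilities rather than unbounded scores.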

Tasks: Natural Language Understanding
