no code implementations • NAACL 2021 • Tzu-Hsiang Lin, Yipeng Shi, Chentao Ye, Yang Fan, Weitong Ruan, Emre Barut, Wael Hamza, Chengwei Su
In commercial dialogue systems, the Spoken Language Understanding (SLU) component tends to span numerous domains, so dialogue context is needed to help resolve ambiguities.
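A minimal sketch of the idea, assuming a simple two-encoder setup (all module names here are illustrative, not the paper's actual model): encode the current utterance and the prior dialogue turns separately, then fuse both representations before predicting the domain.

```python
# Sketch: context-aware domain classification. Hypothetical architecture,
# shown only to illustrate how context can disambiguate the domain.
import torch
import torch.nn as nn

class ContextualDomainClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_domains):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One encoder for the current utterance, one for prior turns.
        self.utt_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.ctx_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_domains)

    def forward(self, utterance_ids, context_ids):
        _, utt_h = self.utt_enc(self.embed(utterance_ids))
        _, ctx_h = self.ctx_enc(self.embed(context_ids))
        # Fuse utterance and context before predicting the domain,
        # so an ambiguous utterance can lean on the conversation history.
        fused = torch.cat([utt_h[-1], ctx_h[-1]], dim=-1)
        return self.classifier(fused)
```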
no code implementations • COLING 2020 • Mingda Li, Xinyue Liu, Weitong Ruan, Luca Soldaini, Wael Hamza, Chengwei Su
The comparison shows that our model can recover the transcription by integrating fragmented information across hypotheses and identifying frequent error patterns of the ASR module, and can even rewrite the query for better understanding, reflecting how multi-task learning broadcasts knowledge across tasks.
Automatic Speech Recognition (ASR) +6
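As a rough illustration of integrating fragmented information across hypotheses, the n-best list can be concatenated into one input for a seq2seq corrector. BART is used here only as a stand-in; the paper's actual architecture and separator scheme may differ.

```python
# Sketch: let a seq2seq model recover the intended transcription from
# several ASR hypotheses at once. Stand-in model, not the paper's setup.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

nbest = [
    "play the song yesterday",
    "play the some yesterday",
    "lay the song yesterday",
]
# Separator tokens let the encoder see fragmented evidence across hypotheses.
source = " </s> ".join(nbest)
inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```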
no code implementations • Findings of the Association for Computational Linguistics 2020 • Luoxin Chen, Xinyue Liu, Weitong Ruan, Jianhua Lu
Adversarial training (AT) has shown strong regularization effects on deep learning algorithms by introducing small input perturbations to improve model robustness.
Ranked #3 on Chunking on CoNLL 2000 (using extra training data)
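A minimal sketch of the AT recipe as commonly applied to sequence labeling, perturbing word embeddings with one FGSM-style step. The embedding-level `model` and `loss_fn` interfaces are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: adversarial training on embeddings (assumed shape: batch x
# seq_len x dim). Returns clean loss + loss on adversarially perturbed
# inputs, the usual AT objective.
import torch

def adversarial_training_loss(model, embeddings, labels, loss_fn, epsilon=1e-2):
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeddings), labels)
    # Gradient w.r.t. the inputs gives the worst-case direction.
    grad, = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1)
    # Small, norm-bounded perturbation; treated as a constant afterwards.
    perturbed = (embeddings + epsilon * grad / norm).detach()
    return clean_loss + loss_fn(model(perturbed), labels)
```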
no code implementations • ACL 2020 • Luoxin Chen, Weitong Ruan, Xinyue Liu, Jianhua Lu
Virtual adversarial training (VAT) is a powerful technique to improve model robustness in both supervised and semi-supervised settings.
Ranked #7 on Chunking on CoNLL 2000
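A compact sketch of the standard VAT objective: find a small perturbation that most changes the model's output distribution (via one step of power iteration), then penalize that change with a KL term. No labels are needed, which is what makes VAT usable in the semi-supervised setting; the embedding-level `model` interface is an assumption.

```python
# Sketch: virtual adversarial training loss on embeddings
# (assumed shape: batch x seq_len x dim).
import torch
import torch.nn.functional as F

def vat_loss(model, embeddings, xi=1e-6, epsilon=1e-2, n_power=1):
    with torch.no_grad():
        logp = F.log_softmax(model(embeddings), dim=-1)
    # Power iteration to approximate the most sensitive direction.
    d = torch.randn_like(embeddings)
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(d)
        d.requires_grad_(True)
        adv_logp = F.log_softmax(model(embeddings + d), dim=-1)
        kl = F.kl_div(adv_logp, logp, log_target=True, reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0]
    r_adv = epsilon * F.normalize(d.flatten(1), dim=1).view_as(d)
    adv_logp = F.log_softmax(model(embeddings + r_adv), dim=-1)
    # Penalize divergence between clean and perturbed predictions.
    return F.kl_div(adv_logp, logp, log_target=True, reduction="batchmean")
```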
no code implementations • 11 Jan 2020 • Mingda Li, Weitong Ruan, Xinyue Liu, Luca Soldaini, Wael Hamza, Chengwei Su
The NLU module usually uses only the first-best interpretation of a given speech input in downstream tasks such as domain and intent classification.
Automatic Speech Recognition (ASR) +5
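To make the 1-best vs. n-best distinction concrete, a toy example (the data structure is hypothetical, not a real ASR API):

```python
# Illustrative contrast between 1-best and n-best usage in NLU.
nbest = [
    {"text": "turn off the lights", "score": -1.2},
    {"text": "turn of the lights", "score": -1.5},
    {"text": "turn off the light", "score": -1.7},
]

# Conventional pipeline: keep only the highest-scoring hypothesis.
one_best = max(nbest, key=lambda h: h["score"])["text"]

# n-best-aware pipeline: expose every hypothesis to the NLU model so it
# can recover when the top hypothesis contains an ASR error.
nbest_input = " | ".join(h["text"] for h in nbest)
print(one_best)
print(nbest_input)
```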
no code implementations • 22 Sep 2017 • Weitong Ruan, Eric L. Miller
Multi-task/multi-output learning seeks to exploit correlation among tasks to improve performance over solving each task independently.
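A minimal hard-parameter-sharing sketch, the simplest way correlated tasks can share statistical strength; the architecture is illustrative only, not the paper's method.

```python
# Sketch: one shared encoder, one output head per task. Correlated tasks
# improve each other through the shared representation.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, d) for d in task_out_dims
        )

    def forward(self, x):
        h = self.shared(x)  # representation shared across all tasks
        return [head(h) for head in self.heads]

model = MultiTaskModel(in_dim=16, hidden_dim=32, task_out_dims=[3, 5])
outputs = model(torch.randn(4, 16))  # one output tensor per task
```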