Search Results for author: Weitong Ruan

Found 6 papers, 0 papers with code

Contextual Domain Classification with Temporal Representations

no code implementations NAACL 2021 Tzu-Hsiang Lin, Yipeng Shi, Chentao Ye, Yang Fan, Weitong Ruan, Emre Barut, Wael Hamza, Chengwei Su

In commercial dialogue systems, the Spoken Language Understanding (SLU) component tends to have numerous domains, so context is needed to help resolve ambiguities.

Tasks: Classification, Domain Classification +1
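Note: a minimal sketch of the idea behind contextual domain classification, combining the current utterance encoding with features from the previous turn (its predicted domain and the elapsed time) so that context can disambiguate between many domains. The specific features, layer sizes, and architecture here are illustrative assumptions, not the paper's temporal representation design.

```python
import torch
import torch.nn as nn

class ContextualDomainClassifier(nn.Module):
    def __init__(self, vocab_size, num_domains, emb_dim=64, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.utt_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        # +1 slot for "no previous turn" (assumed convention for this sketch)
        self.prev_domain_emb = nn.Embedding(num_domains + 1, hidden)
        self.out = nn.Linear(2 * hidden + 1, num_domains)

    def forward(self, token_ids, prev_domain, time_gap):
        # token_ids: (batch, seq_len); prev_domain: (batch,); time_gap: (batch,) in seconds
        _, h = self.utt_enc(self.word_emb(token_ids))         # h: (1, batch, hidden)
        ctx = self.prev_domain_emb(prev_domain)               # (batch, hidden)
        feats = torch.cat([h.squeeze(0), ctx, time_gap.unsqueeze(-1)], dim=-1)
        return self.out(feats)

model = ContextualDomainClassifier(vocab_size=1000, num_domains=20)
tokens = torch.randint(0, 1000, (3, 10))       # toy batch of 3 utterances
prev = torch.tensor([20, 4, 7])                # 20 = "no previous turn" slot
gap = torch.tensor([0.0, 3.5, 120.0])
print(model(tokens, prev, gap).shape)          # torch.Size([3, 20])
```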

Multi-task Learning of Spoken Language Understanding by Integrating N-Best Hypotheses with Hierarchical Attention

no code implementations COLING 2020 Mingda Li, Xinyue Liu, Weitong Ruan, Luca Soldaini, Wael Hamza, Chengwei Su

The comparison shows that our model can recover the transcription by integrating fragmented information across hypotheses and identifying the frequent error patterns of the ASR module, and can even rewrite the query for a better understanding, which reveals how multi-task learning broadcasts knowledge across tasks.

Tasks: Automatic Speech Recognition (ASR) +6
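Note: a minimal sketch of hierarchical attention over an ASR N-best list with two classification heads sharing one encoder. The additive attention form, layer sizes, and the choice of heads (domain and intent classification rather than the paper's transcription-recovery task) are illustrative assumptions, not the exact architecture.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    """Additive attention that pools a sequence of vectors into one vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x):                        # x: (..., length, dim)
        w = torch.softmax(self.score(x), dim=-2)
        return (w * x).sum(dim=-2)

class NBestHierarchicalModel(nn.Module):
    def __init__(self, vocab_size, num_domains, num_intents, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_enc = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.word_attn = Attention(2 * hidden)   # words -> one vector per hypothesis
        self.hyp_attn = Attention(2 * hidden)    # hypotheses -> one utterance vector
        self.domain_head = nn.Linear(2 * hidden, num_domains)
        self.intent_head = nn.Linear(2 * hidden, num_intents)

    def forward(self, token_ids):
        # token_ids: (batch, n_best, seq_len)
        b, n, t = token_ids.shape
        x = self.emb(token_ids).view(b * n, t, -1)
        h, _ = self.word_enc(x)                  # (b*n, t, 2*hidden)
        hyp_vecs = self.word_attn(h).view(b, n, -1)
        utt_vec = self.hyp_attn(hyp_vecs)        # attend across the N-best list
        return self.domain_head(utt_vec), self.intent_head(utt_vec)

model = NBestHierarchicalModel(vocab_size=1000, num_domains=10, num_intents=8)
n_best = torch.randint(0, 1000, (2, 5, 12))      # 2 utterances, 5 hypotheses each
domain_logits, intent_logits = model(n_best)
print(domain_logits.shape, intent_logits.shape)  # torch.Size([2, 10]) torch.Size([2, 8])
```

In the multi-task setting, both heads are trained jointly on the shared hierarchical encoder, which is what lets information learned for one task benefit the other.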

Enhance Robustness of Sequence Labelling with Masked Adversarial Training

no code implementations Findings of the Association for Computational Linguistics 2020 Luoxin Chen, Xinyue Liu, Weitong Ruan, Jianhua Lu

Adversarial training (AT) has shown strong regularization effects on deep learning algorithms by introducing small input perturbations to improve model robustness.

Ranked #3 on Chunking on CoNLL 2000 (using extra training data)

Tasks: Chunking, Named Entity Recognition +5
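Note: a minimal sketch of adversarial training (AT) for sequence labelling with a random token-level mask restricting which positions receive the embedding perturbation. The BiLSTM tagger, hyper-parameters, and masking scheme are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids=None, embeddings=None):
        # Allow the adversarial step to feed perturbed embeddings directly.
        x = self.emb(token_ids) if embeddings is None else embeddings
        h, _ = self.lstm(x)
        return self.out(h)                       # (batch, seq_len, num_tags)

def masked_adversarial_loss(model, token_ids, tags, epsilon=1.0, mask_prob=0.3):
    """Clean loss + loss on embeddings perturbed at a random subset of tokens."""
    # Clean loss through the normal forward pass.
    clean_loss = F.cross_entropy(model(token_ids).transpose(1, 2), tags)

    # Gradient of the loss w.r.t. the embeddings gives the perturbation direction.
    emb = model.emb(token_ids).detach().requires_grad_(True)
    ref_loss = F.cross_entropy(model(embeddings=emb).transpose(1, 2), tags)
    grad, = torch.autograd.grad(ref_loss, emb)
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    # Only perturb a random fraction of the token positions (the "mask").
    mask = torch.rand(token_ids.shape, device=token_ids.device) < mask_prob
    adv_emb = emb.detach() + delta * mask.unsqueeze(-1)

    adv_loss = F.cross_entropy(model(embeddings=adv_emb).transpose(1, 2), tags)
    return clean_loss + adv_loss

model = BiLSTMTagger(vocab_size=1000, num_tags=9)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 1000, (4, 20))         # toy batch
tags = torch.randint(0, 9, (4, 20))
loss = masked_adversarial_loss(model, tokens, tags)
opt.zero_grad()
loss.backward()
opt.step()
```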

SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling

no code implementations ACL 2020 Luoxin Chen, Weitong Ruan, Xinyue Liu, Jianhua Lu

Virtual adversarial training (VAT) is a powerful technique to improve model robustness in both supervised and semi-supervised settings.

Tasks: Chunking, General Classification +6
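Note: a minimal sketch of the standard virtual adversarial training (VAT) loss on word embeddings, the ingredient SeqVAT builds on. It assumes a tagger with the same interface as the BiLSTMTagger sketched under the previous entry (an `emb` layer and an `embeddings=` keyword). The power-iteration step and hyper-parameters are generic VAT choices for illustration; they are not the paper's exact formulation, which additionally handles a CRF output layer.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, token_ids, xi=1e-6, epsilon=2.0, n_power=1):
    """Per-token KL(p(y|x) || p(y|x + r_adv)), averaged; needs no labels."""
    with torch.no_grad():
        emb = model.emb(token_ids)
        p = F.softmax(model(embeddings=emb), dim=-1)         # clean predictions

    # Power iteration: find the direction that most changes the predictions.
    d = torch.randn_like(emb)
    for _ in range(n_power):
        d = xi * F.normalize(d, dim=-1)
        d.requires_grad_(True)
        q = F.log_softmax(model(embeddings=emb + d), dim=-1)
        kl = F.kl_div(q, p, reduction="batchmean")
        d, = torch.autograd.grad(kl, d)

    # Final adversarial perturbation and the VAT regularization term.
    r_adv = epsilon * F.normalize(d, dim=-1)
    q = F.log_softmax(model(embeddings=emb + r_adv), dim=-1)
    return F.kl_div(q, p, reduction="batchmean")
```

Because the loss compares the model's own predictions before and after the perturbation, it can be computed on unlabelled sentences, which is what makes the semi-supervised setting possible.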

Improving Spoken Language Understanding By Exploiting ASR N-best Hypotheses

no code implementations 11 Jan 2020 Mingda Li, Weitong Ruan, Xinyue Liu, Luca Soldaini, Wael Hamza, Chengwei Su

The NLU module usually uses only the first-best interpretation of a given speech input in downstream tasks such as domain and intent classification.

Tasks: Automatic Speech Recognition (ASR) +5
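Note: a minimal sketch of one simple way to exploit the N-best list instead of only the first-best hypothesis: concatenate the hypotheses with a separator token and feed the result to a standard classifier. The separator convention and the bag-of-embeddings classifier are illustrative assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

SEP_ID = 1                                       # assumed separator token id

def concat_n_best(hypotheses):
    """hypotheses: list of token-id lists, best first -> one flat id sequence."""
    joined = []
    for hyp in hypotheses:
        joined.extend(hyp + [SEP_ID])
    return torch.tensor(joined).unsqueeze(0)     # (1, total_len)

class BagOfEmbeddingsIntentClassifier(nn.Module):
    def __init__(self, vocab_size, num_intents, emb_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.out = nn.Linear(emb_dim, num_intents)

    def forward(self, token_ids):                # (batch, seq_len)
        return self.out(self.emb(token_ids).mean(dim=1))

model = BagOfEmbeddingsIntentClassifier(vocab_size=1000, num_intents=8)
n_best = [[12, 45, 301], [12, 45, 310], [12, 47, 301]]   # toy 3-best list
logits = model(concat_n_best(n_best))
print(logits.shape)                              # torch.Size([1, 8])
```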

Ensemble Multi-task Gaussian Process Regression with Multiple Latent Processes

no code implementations 22 Sep 2017 Weitong Ruan, Eric L. Miller

Multi-task/Multi-output learning seeks to exploit correlation among tasks to enhance performance over learning or solving each task independently.

Tasks: Gaussian Processes, Regression
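Note: a minimal sketch of multi-task GP regression with a coregionalization (ICM) kernel, where the tasks share structure through a low-rank task-covariance matrix built from a latent process. This generic construction illustrates how correlated tasks can borrow strength from each other; it is not the ensemble scheme proposed in the paper, and the toy data and hyper-parameters are assumptions.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def icm_kernel(x1, t1, x2, t2, B, lengthscale=1.0):
    """K((x, t), (x', t')) = B[t, t'] * k_rbf(x, x'); t are integer task indices."""
    return B[np.ix_(t1, t2)] * rbf(x1, x2, lengthscale)

rng = np.random.default_rng(0)

# Two correlated toy tasks observed at different input locations.
x_train = np.concatenate([np.linspace(0, 5, 20), np.linspace(0, 5, 10)])
t_train = np.concatenate([np.zeros(20, int), np.ones(10, int)])
y_train = np.sin(x_train) + 0.3 * t_train + 0.05 * rng.standard_normal(x_train.size)

# Task covariance B = W W^T + diag(v): one shared latent process plus per-task variance.
W = np.array([[1.0], [0.9]])
B = W @ W.T + np.diag([0.05, 0.05])

noise = 1e-2
K = icm_kernel(x_train, t_train, x_train, t_train, B) + noise * np.eye(x_train.size)

# Predict task 1 on a dense grid using observations from *both* tasks.
x_test = np.linspace(0, 5, 100)
t_test = np.ones(100, int)
K_star = icm_kernel(x_test, t_test, x_train, t_train, B)
posterior_mean = K_star @ np.linalg.solve(K, y_train)
print(posterior_mean[:5])
```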
