Search Results for author: Hong-Kwang J. Kuo

Found 11 papers, 1 paper with code

Towards Reducing the Need for Speech Training Data To Build Spoken Language Understanding Systems

no code implementations · 26 Feb 2022 · Samuel Thomas, Hong-Kwang J. Kuo, Brian Kingsbury, George Saon

In this paper, we propose a novel text representation and training methodology that allows E2E SLU systems to be effectively constructed using text-only resources.

Spoken Language Understanding
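
The snippet above gives no implementation detail; as a purely illustrative sketch of the general idea, the hypothetical code below trains an intent classifier on text by embedding tokens into the same space a speech encoder would produce, so the classifier head can later be reused on acoustic embeddings. All names and dimensions are assumptions, not the authors' code.

    import torch
    import torch.nn as nn

    # Hypothetical sketch: train the SLU intent head on text-only data by
    # projecting tokens into the embedding space the speech encoder uses,
    # so the same head can later consume real acoustic embeddings.
    VOCAB, DIM, N_INTENTS = 1000, 256, 20
    embed = nn.Embedding(VOCAB, DIM)
    encoder = nn.LSTM(DIM, DIM, batch_first=True)    # stand-in text encoder
    intent_head = nn.Linear(DIM, N_INTENTS)          # shared with speech path
    params = (list(embed.parameters()) + list(encoder.parameters())
              + list(intent_head.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)

    def text_only_step(token_ids, intent_labels):
        """One training step that uses text in place of speech features."""
        states, _ = encoder(embed(token_ids))        # (B, T, DIM)
        loss = nn.functional.cross_entropy(intent_head(states[:, -1]),
                                           intent_labels)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    text_only_step(torch.randint(0, VOCAB, (8, 12)),
                   torch.randint(0, N_INTENTS, (8,)))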

Integrating Text Inputs For Training and Adapting RNN Transducer ASR Models

no code implementations · 26 Feb 2022 · Samuel Thomas, Brian Kingsbury, George Saon, Hong-Kwang J. Kuo

We observe 20-45% relative word error rate (WER) reduction in these settings with this novel LM style customization technique using only unpaired text data from the new domains.

Automatic Speech Recognition (ASR) +1
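
The entry does not spell out the method; one common reading of "LM style customization" is fine-tuning the RNN-T prediction network, which behaves like an internal language model, on unpaired domain text. The sketch below illustrates that reading under stated assumptions and is not the paper's recipe.

    import torch
    import torch.nn as nn

    # Hedged sketch: treat the RNN-T prediction network as an internal LM
    # and fine-tune it on unpaired domain text with next-token cross-entropy.
    # The class is a stand-in; a real system would reuse the trained network.
    class PredictionNetwork(nn.Module):
        def __init__(self, vocab=1000, dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.rnn = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, vocab)  # stands in for the joint network

        def forward(self, tokens):
            hidden, _ = self.rnn(self.embed(tokens))
            return self.out(hidden)

    def adapt_on_text(net, token_batches, lr=1e-4):
        """Fine-tune on unpaired text batches of shape (B, T)."""
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for tokens in token_batches:
            logits = net(tokens[:, :-1])               # predict the next token
            loss = nn.functional.cross_entropy(
                logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()

    adapt_on_text(PredictionNetwork(),
                  [torch.randint(0, 1000, (4, 16)) for _ in range(3)])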

Improving End-to-End Models for Set Prediction in Spoken Language Understanding

no code implementations · 28 Jan 2022 · Hong-Kwang J. Kuo, Zoltán Tüske, Samuel Thomas, Brian Kingsbury, George Saon

The goal of spoken language understanding (SLU) systems is to determine the meaning of the input speech signal, unlike speech recognition, which aims to produce verbatim transcripts.

Data Augmentation · Speech Recognition +2
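
Because SLU output is a set of entities rather than an ordered transcript, scoring should not penalize order. As a minimal illustration (a common SLU scoring convention, assumed here rather than taken from the paper), predicted entities can be matched against the reference as a multiset:

    from collections import Counter

    def entity_set_f1(predicted, reference):
        """Order-invariant F1 over (slot, value) pairs.

        Multiset intersection handles duplicates; this convention is an
        assumption, not the paper's exact metric.
        """
        if not predicted or not reference:
            return 0.0
        pred, ref = Counter(predicted), Counter(reference)
        overlap = sum((pred & ref).values())
        if overlap == 0:
            return 0.0
        precision = overlap / sum(pred.values())
        recall = overlap / sum(ref.values())
        return 2 * precision * recall / (precision + recall)

    # Order does not matter: both orderings score a perfect 1.0.
    hyp = [("toloc", "boston"), ("fromloc", "denver")]
    ref = [("fromloc", "denver"), ("toloc", "boston")]
    assert entity_set_f1(hyp, ref) == 1.0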

Integrating Dialog History into End-to-End Spoken Language Understanding Systems

no code implementations · 18 Aug 2021 · Jatin Ganhotra, Samuel Thomas, Hong-Kwang J. Kuo, Sachindra Joshi, George Saon, Zoltán Tüske, Brian Kingsbury

End-to-end spoken language understanding (SLU) systems that process human-human or human-computer interactions are often context-independent and process each turn of a conversation independently.

Intent Recognition · Spoken Language Understanding
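
The abstract states only that context is integrated; as a minimal sketch under stated assumptions (not the authors' architecture), one can encode the previous turns and concatenate that vector with the current turn's pooled speech embedding before intent classification.

    import torch
    import torch.nn as nn

    # Hypothetical sketch: fuse a dialog-history embedding with the current
    # turn's pooled acoustic embedding before intent classification.
    SPEECH_DIM, HIST_DIM, N_INTENTS = 512, 256, 20
    history_encoder = nn.GRU(300, HIST_DIM, batch_first=True)  # past-turn word vectors
    classifier = nn.Linear(SPEECH_DIM + HIST_DIM, N_INTENTS)

    def classify_turn(speech_emb, history_word_vecs):
        """speech_emb: (B, SPEECH_DIM); history_word_vecs: (B, T, 300)."""
        _, h = history_encoder(history_word_vecs)      # h: (1, B, HIST_DIM)
        fused = torch.cat([speech_emb, h.squeeze(0)], dim=-1)
        return classifier(fused)                       # intent logits

    logits = classify_turn(torch.randn(2, SPEECH_DIM), torch.randn(2, 5, 300))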

End-to-end spoken language understanding using transformer networks and self-supervised pre-trained features

no code implementations · 16 Nov 2020 · Edmilson Morais, Hong-Kwang J. Kuo, Samuel Thomas, Zoltán Tüske, Brian Kingsbury

Transformer networks and self-supervised pre-training have consistently delivered state-of-the-art results in the field of natural language processing (NLP); however, their merits in the field of spoken language understanding (SLU) still need further investigation.

Spoken Language Understanding
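
As a hedged sketch of the pipeline the title describes, the code below pools self-supervised wav2vec 2.0 features and classifies intents with a linear head. The checkpoint name is a real public model, but the head, frozen encoder, and intent count are assumptions; the paper's actual architecture may differ.

    import torch
    import torch.nn as nn
    from transformers import Wav2Vec2Model

    # Extract self-supervised features, mean-pool over time, then classify.
    encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
    encoder.eval()  # frozen here for simplicity; fine-tuning is also possible
    intent_head = nn.Linear(encoder.config.hidden_size, 20)  # 20 assumed intents

    def predict_intent(waveform_16khz):
        """waveform_16khz: (batch, samples) float tensor sampled at 16 kHz."""
        with torch.no_grad():
            feats = encoder(waveform_16khz).last_hidden_state  # (B, T', H)
        return intent_head(feats.mean(dim=1))                  # mean-pool, classify

    logits = predict_intent(torch.randn(1, 16000))  # one second of audio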

End-to-End Spoken Language Understanding Without Full Transcripts

no code implementations · 30 Sep 2020 · Hong-Kwang J. Kuo, Zoltán Tüske, Samuel Thomas, Yinghui Huang, Kartik Audhkhasi, Brian Kingsbury, Gakuto Kurata, Zvi Kons, Ron Hoory, Luis Lastras

For our speech-to-entities experiments on the ATIS corpus, both the CTC and attention models showed an impressive ability to skip non-entity words: there was little degradation when trained on just entities versus full transcripts.

Slot Filling +3
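
CTC's ability to "skip" non-entity words follows from blank-dominated alignments: with an entity-only output vocabulary, frames covering other words are absorbed by the blank symbol. The sketch below shows standard CTC greedy decoding (collapse repeats, drop blanks), which is well-established rather than specific to this paper.

    import torch

    def ctc_greedy_decode(log_probs, blank=0):
        """Standard CTC greedy decoding: argmax each frame, collapse
        repeats, drop blanks.

        log_probs: (T, V) per-frame log-probabilities.
        """
        best = log_probs.argmax(dim=-1).tolist()
        decoded, prev = [], blank
        for idx in best:
            if idx != blank and idx != prev:
                decoded.append(idx)
            prev = idx
        return decoded

    print(ctc_greedy_decode(torch.randn(10, 5)))  # e.g. [3, 1, 4]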

The IBM 2016 English Conversational Telephone Speech Recognition System

no code implementations · 27 Apr 2016 · George Saon, Tom Sercu, Steven Rennie, Hong-Kwang J. Kuo

We describe a collection of acoustic and language modeling techniques that lowered the word error rate of our English conversational telephone LVCSR system to a record 6.6% on the Switchboard subset of the Hub5 2000 evaluation test set.

Language Modelling · Speech Recognition +1
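
For reference, word error rate is the word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of reference words, so 6.6% means about 6.6 word errors per 100 reference words. A compact, standard implementation:

    def wer(reference, hypothesis):
        """Word error rate via dynamic-programming edit distance over words."""
        ref, hyp = reference.split(), hypothesis.split()
        d = list(range(len(hyp) + 1))          # edit-distance row for 0 ref words
        for i, r in enumerate(ref, 1):
            prev, d[0] = d[0], i
            for j, h in enumerate(hyp, 1):
                prev, d[j] = d[j], min(d[j] + 1,        # deletion
                                       d[j - 1] + 1,    # insertion
                                       prev + (r != h)) # substitution
        return d[-1] / len(ref)

    print(wer("the cat sat", "the bat sat"))  # 1 substitution / 3 words ≈ 0.333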
