no code implementations • 2 Nov 2020 • Ting-yao Hu, Ashish Shrivastava, Jen-Hao Rick Chang, Hema Koppula, Stefan Braun, Kyuyeon Hwang, Ozlem Kalinli, Oncel Tuzel
Our policy adapts the augmentation parameters based on the training loss of the data samples.
Automatic Speech Recognition (ASR) +2
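A minimal sketch of the loss-adaptive idea described above, assuming a simple "easy samples get stronger augmentation" rule; the function names and the Gaussian-noise perturbation are illustrative stand-ins, not the paper's actual policy.

```python
import numpy as np

def augmentation_strength(losses, min_scale=0.1, max_scale=1.0):
    """Map per-sample training losses to augmentation scales in [min_scale, max_scale]."""
    losses = np.asarray(losses, dtype=float)
    norm = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)  # 0 = easiest
    return max_scale - norm * (max_scale - min_scale)  # easy sample -> strong augmentation

def augment(batch, losses, rng):
    """Perturb each sample with Gaussian noise scaled by its loss-derived strength."""
    scales = augmentation_strength(losses)
    return batch + rng.standard_normal(batch.shape) * scales[:, None]

rng = np.random.default_rng(0)
batch = np.zeros((4, 8))                        # four dummy feature vectors
noisy = augment(batch, [0.2, 1.5, 0.7, 3.0], rng)
print(noisy.std(axis=1).round(2))               # lower-loss samples receive more noise
```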
no code implementations • 19 Nov 2016 • Sungho Shin, Kyuyeon Hwang, Wonyong Sung
The complexity of deep neural network algorithms for hardware implementation can be lowered either by scaling down the number of units or by reducing the word-length of the weights.
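A minimal sketch of the second option, word-length reduction via uniform fixed-point quantization; the function and step-size choice are illustrative, not taken from the paper.

```python
import numpy as np

def quantize(weights, wordlength):
    """Quantize weights to a signed fixed-point format with `wordlength` bits."""
    levels = 2 ** (wordlength - 1)             # one bit is spent on the sign
    step = np.abs(weights).max() / levels      # uniform quantization step
    q = np.clip(np.round(weights / step), -levels, levels - 1)
    return q * step

w = np.random.default_rng(0).standard_normal(1000)
for bits in (8, 4, 2):
    err = np.mean((w - quantize(w, bits)) ** 2)
    print(f"{bits}-bit weights, quantization MSE = {err:.5f}")
```

Shorter word-lengths shrink storage and arithmetic cost at the price of quantization error, which is the trade-off the paper analyzes.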
no code implementations • 30 Sep 2016 • Minjae Lee, Kyuyeon Hwang, Jinhwan Park, Sungwook Choi, Sungho Shin, Wonyong Sung
The weights are quantized to 6 bits to store all of them in the on-chip memory of an FPGA.
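A back-of-the-envelope check of why 6-bit weights matter for on-chip storage; the network size below is a hypothetical figure for illustration, not from the paper.

```python
def storage_kbits(num_weights, bits_per_weight):
    """Total weight storage in kilobits."""
    return num_weights * bits_per_weight / 1024

num_weights = 3_000_000  # assumed network size, for illustration only
for bits in (32, 6):
    print(f"{bits}-bit weights: {storage_kbits(num_weights, bits):,.0f} Kbit")
# 6-bit storage is 32/6 ~ 5.3x smaller than single-precision floats, which can
# be the difference between fitting in FPGA block RAM and spilling to DRAM.
```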
no code implementations • 13 Sep 2016 • Kyuyeon Hwang, Wonyong Sung
Recurrent neural network (RNN) based character-level language models (CLMs) are naturally well suited to modeling out-of-vocabulary words, since they assign probabilities character by character rather than from a fixed word vocabulary.
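A toy illustration of that property, with a character bigram model standing in for the RNN: because probabilities compose character by character, any spelling receives a score, including words never seen in training. The add-one smoothing over an assumed 27-symbol alphabet is an illustrative choice.

```python
import math
from collections import Counter

def train_char_bigram(corpus):
    """Return P(c | prev) with add-one smoothing over an assumed 27-symbol alphabet."""
    counts = Counter(zip(corpus, corpus[1:]))
    totals = Counter(corpus[:-1])
    return lambda prev, c: (counts[(prev, c)] + 1) / (totals[prev] + 27)

def word_logprob(model, word):
    """Sum character log-probabilities, starting from a space context."""
    return sum(math.log(model(prev, c)) for prev, c in zip(" " + word, word))

model = train_char_bigram("the cat sat on the mat ")
# "zat" never occurs in the training text, yet still receives a probability.
print(word_logprob(model, "cat"), word_logprob(model, "zat"))
```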
no code implementations • 14 Aug 2016 • Sungho Shin, Kyuyeon Hwang, Wonyong Sung
In this paper, we propose a generative knowledge transfer technique that trains an RNN based language model (student network) using text and output probabilities generated from a previously trained RNN (teacher network).
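A minimal sketch of the transfer objective this suggests: the student is trained toward the teacher's output distribution (soft targets) on teacher-generated text. Shapes, names, and the plain cross-entropy form are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def transfer_loss(student_logits, teacher_probs):
    """Cross-entropy of the teacher's soft targets against the student distribution."""
    return -(teacher_probs * log_softmax(student_logits)).sum(axis=-1).mean()

rng = np.random.default_rng(0)
teacher_probs = np.exp(log_softmax(rng.standard_normal((5, 100))))  # teacher outputs
student_logits = rng.standard_normal((5, 100))                       # student predictions
print(transfer_loss(student_logits, teacher_probs))
```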
1 code implementation • 25 Jan 2016 • Kyuyeon Hwang, Wonyong Sung
The output values of the CTC-trained RNN are character-level probabilities, which are processed by beam search decoding.
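A minimal best-path (greedy) CTC decoder, shown as a simplified alternative to the beam search the paper uses: take the per-frame argmax, collapse repeated characters, and drop blanks. Beam search instead keeps multiple candidate prefixes per frame.

```python
import numpy as np

def ctc_greedy_decode(probs, alphabet, blank=0):
    """probs: (T, C) per-frame character probabilities from a CTC-trained RNN."""
    best = probs.argmax(axis=1)
    out, prev = [], blank
    for idx in best:
        if idx != prev and idx != blank:   # collapse repeats, skip blanks
            out.append(alphabet[idx])
        prev = idx
    return "".join(out)

alphabet = ["-", "a", "b", "c"]            # index 0 is the CTC blank symbol
probs = np.eye(4)[[1, 1, 0, 2, 2, 3]]      # six frames: a a - b b c
print(ctc_greedy_decode(probs, alphabet))  # -> "abc"
```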
no code implementations • 30 Dec 2015 • Kyuyeon Hwang, Minjae Lee, Wonyong Sung
In this paper, we propose a context-aware keyword spotting model employing a character-level recurrent neural network (RNN) for spoken term detection in continuous speech.
1 code implementation • 29 Dec 2015 • Sajid Anwar, Kyuyeon Hwang, Wonyong Sung
To determine the importance of network connections and paths, the proposed method uses a particle filtering approach.
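A loose sketch of how particle filtering can score connections for pruning: candidate pruning masks act as particles, each weighted by how little damage it does to a (here, synthetic) loss, and importance is read off the resampled survivors. This is an illustrative simplification, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(16)                      # weights of a toy layer
loss = lambda mask: np.sum((w - mask * w) ** 2)  # error caused by pruning

# Particles: random binary masks keeping roughly half of the connections.
particles = rng.random((256, w.size)) < 0.5
weights = np.exp(-np.array([loss(m) for m in particles]))
weights /= weights.sum()

# Resample particles in proportion to their weights, then estimate per-connection
# importance as the fraction of surviving masks that keep each connection.
idx = rng.choice(len(particles), size=len(particles), p=weights)
importance = particles[idx].mean(axis=0)
print(np.round(importance, 2))  # connections with large |w| score near 1.0
```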
no code implementations • 4 Dec 2015 • Sungho Shin, Kyuyeon Hwang, Wonyong Sung
Recurrent neural networks have shown excellent performance in many applications; however, they require increased complexity in hardware- or software-based implementations.
no code implementations • 21 Nov 2015 • Kyuyeon Hwang, Wonyong Sung
Our online model achieves a 20.7% phoneme error rate (PER) on a very long input sequence generated by concatenating all 192 utterances in the TIMIT core test set.
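For reference, PER is the Levenshtein (edit) distance between the hypothesis and reference phoneme sequences, divided by the reference length. The phoneme strings below are made up for illustration.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via the standard rolling-row dynamic program."""
    prev_row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        row = [i]
        for j, h in enumerate(hyp, 1):
            row.append(min(prev_row[j] + 1,                 # deletion
                           row[j - 1] + 1,                  # insertion
                           prev_row[j - 1] + (r != h)))     # substitution
        prev_row = row
    return prev_row[-1]

def per(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)

ref = "sil dh ax k ae t sil".split()   # hypothetical reference phonemes
hyp = "sil dh ax k ae sil".split()     # hypothesis missing one phoneme
print(f"PER = {per(ref, hyp):.1%}")    # one deletion over seven phonemes -> 14.3%
```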
no code implementations • 20 Nov 2015 • Wonyong Sung, Sungho Shin, Kyuyeon Hwang
In this work, the effects of retraining are analyzed for a feedforward deep neural network (FFDNN) and a convolutional neural network (CNN).
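A minimal sketch of the retraining idea on a toy linear model (the paper studies an FFDNN and a CNN): the forward pass uses quantized weights while gradient updates are applied to a full-precision master copy, i.e. a straight-through-style update. All sizes and the 4-bit setting are illustrative assumptions.

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform signed quantization of w to `bits` bits."""
    levels = 2 ** (bits - 1)
    step = np.abs(w).max() / levels + 1e-12
    return np.clip(np.round(w / step), -levels, levels - 1) * step

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 8))
w_true = rng.standard_normal(8)
y = x @ w_true

w = rng.standard_normal(8) * 0.1            # full-precision master weights
for _ in range(200):
    wq = quantize(w)                        # quantized forward pass
    grad = 2 * x.T @ (x @ wq - y) / len(x)  # gradient w.r.t. quantized weights
    w -= 0.05 * grad                        # update applied to the master copy
print("loss after retraining:", np.mean((x @ quantize(w) - y) ** 2))
```

Retraining lets the network recover most of the accuracy lost at quantization time, leaving only the residual quantization error floor.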
no code implementations • 10 Mar 2015 • Kyuyeon Hwang, Wonyong Sung
Recurrent neural networks (RNNs) have shown outstanding performance on processing sequence data.