Speech Model Pre-training for End-to-End Spoken Language Understanding

7 Apr 2019 · Loren Lugosch, Mirco Ravanelli, Patrick Ignoto, Vikrant Singh Tomar, Yoshua Bengio

Whereas conventional spoken language understanding (SLU) systems map speech to text, and then text to intent, end-to-end SLU systems map speech directly to intent through a single trainable model. Achieving high accuracy with these end-to-end models without a large amount of training data is difficult. We propose a method to reduce the data requirements of end-to-end SLU in which the model is first pre-trained to predict words and phonemes, thus learning good features for SLU. We introduce a new SLU dataset, Fluent Speech Commands, and show that our method improves performance both when the full dataset is used for training and when only a small subset is used. We also describe preliminary experiments to gauge the model's ability to generalize to new phrases not heard during training.
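The abstract describes the core recipe: a speech model is first trained to predict phonemes and words, and the resulting features are then reused for intent classification. Below is a minimal PyTorch sketch of that two-stage idea, not the authors' released implementation; the module sizes, class counts, mean-pooling head, frozen-encoder choice, and dummy tensors are illustrative assumptions.

```python
# Sketch of pre-training a speech encoder on phoneme/word targets, then
# reusing it for intent classification. All hyperparameters are placeholders.
import torch
import torch.nn as nn

class PhonemeModule(nn.Module):
    def __init__(self, feat_dim=40, hidden=128, num_phonemes=42):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_phonemes)   # per-frame phoneme logits

    def forward(self, feats):
        h, _ = self.rnn(feats)                           # (batch, time, 2*hidden)
        return h, self.out(h)

class WordModule(nn.Module):
    def __init__(self, in_dim=256, hidden=128, num_words=10000):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_words)      # per-frame word logits

    def forward(self, h):
        h, _ = self.rnn(h)
        return h, self.out(h)

class IntentClassifier(nn.Module):
    """Pools word-level features over time and predicts an intent label."""
    def __init__(self, in_dim=256, num_intents=31):
        super().__init__()
        self.out = nn.Linear(in_dim, num_intents)

    def forward(self, h):
        return self.out(h.mean(dim=1))                   # mean pooling over time

phoneme_mod, word_mod, intent_mod = PhonemeModule(), WordModule(), IntentClassifier()
ce = nn.CrossEntropyLoss()

# Stage 1 (sketch): pre-train the phoneme/word modules on frame-aligned ASR
# targets, e.g. force-aligned phoneme and word labels from LibriSpeech.
asr_opt = torch.optim.Adam(
    list(phoneme_mod.parameters()) + list(word_mod.parameters()), lr=1e-3)
feats = torch.randn(4, 100, 40)                  # fake (batch, time, feature) input
phn_targets = torch.randint(0, 42, (4, 100))     # fake frame-level phoneme labels
word_targets = torch.randint(0, 10000, (4, 100)) # fake frame-level word labels

h_phn, phn_logits = phoneme_mod(feats)
h_word, word_logits = word_mod(h_phn)
loss = ce(phn_logits.transpose(1, 2), phn_targets) + \
       ce(word_logits.transpose(1, 2), word_targets)
asr_opt.zero_grad(); loss.backward(); asr_opt.step()

# Stage 2 (sketch): train the intent head on an SLU dataset such as
# Fluent Speech Commands, keeping the pre-trained encoder frozen here
# (unfreezing it for fine-tuning is also possible).
slu_opt = torch.optim.Adam(intent_mod.parameters(), lr=1e-3)
intent_targets = torch.randint(0, 31, (4,))      # fake utterance-level intent labels
with torch.no_grad():
    h_phn, _ = phoneme_mod(feats)
    h_word, _ = word_mod(h_phn)
intent_loss = ce(intent_mod(h_word), intent_targets)
slu_opt.zero_grad(); intent_loss.backward(); slu_opt.step()
```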


Datasets


Introduced in the Paper:

Fluent Speech Commands

Used in the Paper:

LibriSpeech

Results from the Paper


Task: Spoken Language Understanding
Dataset: Fluent Speech Commands
Model: Pooling classifier pre-trained using force-aligned phoneme and word labels on LibriSpeech
Metric: Accuracy (%)
Value: 98.8
Global Rank: #15

Methods


No methods listed for this paper.