Using Speech Synthesis to Train End-to-End Spoken Language Understanding Models

21 Oct 2019 · Loren Lugosch, Brett Meyer, Derek Nowrouzezahrai, Mirco Ravanelli

End-to-end models are an attractive new approach to spoken language understanding (SLU) in which the meaning of an utterance is inferred directly from the raw audio without employing the standard pipeline composed of a separately trained speech recognizer and natural language understanding module. The downside of end-to-end SLU is that in-domain speech data must be recorded to train the model. In this paper, we propose a strategy for overcoming this requirement in which speech synthesis is used to generate a large synthetic training dataset from several artificial speakers. Experiments on two open-source SLU datasets confirm the effectiveness of our approach, both as a sole source of training data and as a form of data augmentation.
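To make the described strategy concrete, below is a minimal sketch of the data-generation idea from the abstract: synthesize in-domain utterances with several artificial speakers, then train the end-to-end SLU model on the synthetic audio alone or mixed with real recordings. The TTS interface (`synthesize`), the number of speakers, and the dataset helpers are hypothetical placeholders for illustration, not the authors' actual code or toolchain.

```python
import random

SPEAKER_IDS = list(range(8))  # several artificial TTS speakers (assumed count)


def synthesize(text: str, speaker_id: int) -> list:
    """Placeholder TTS call: return a waveform for `text` spoken by
    artificial speaker `speaker_id`. Plug in any multi-speaker TTS system."""
    raise NotImplementedError("replace with a real speech synthesizer")


def build_synthetic_dataset(transcripts_with_labels):
    """Generate one synthetic audio example per (transcript, label) pair
    and per artificial speaker."""
    dataset = []
    for text, label in transcripts_with_labels:
        for spk in SPEAKER_IDS:
            audio = synthesize(text, spk)
            dataset.append((audio, label))
    return dataset


def make_training_set(real_data, synthetic_data, augment_with_real=True):
    """Use the synthetic data either as the sole source of training data
    or as augmentation on top of real recordings (the two settings
    evaluated in the paper)."""
    data = list(synthetic_data)
    if augment_with_real:
        data.extend(real_data)
    random.shuffle(data)
    return data
```

The resulting list of (audio, label) pairs would then be fed to whatever end-to-end SLU training loop is in use; the sketch only covers the data-generation step.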

Benchmark result:
  Task:         Spoken Language Understanding
  Dataset:      Snips-SmartLights
  Model:        Real + synthetic
  Metric:       Accuracy (%)
  Value:        71.4
  Global Rank:  #7
