Search Results for author: Loren Lugosch

Found 11 papers, 9 papers with code

Neural Offset Min-Sum Decoding

1 code implementation • 20 Jan 2017 • Loren Lugosch, Warren J. Gross

After describing our method, we compare the performance of the two neural decoding algorithms and show that our method achieves error-correction performance within 0.1 dB of the multiplicative approach and as much as 1 dB better than traditional belief propagation for the codes under consideration.
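As a rough sketch of the rule this work builds on (not the paper's trained neural variant): offset min-sum replaces the sum-product check-node update with a sign product and an offset-corrected minimum, and the neural version learns the offsets from data. A minimal NumPy illustration with a hand-picked offset `beta`:

```python
import numpy as np

def offset_min_sum_check_update(msgs, beta=0.15):
    """Offset min-sum check-node update for one check node.

    msgs: incoming variable-to-check messages (LLRs).
    Each outgoing message excludes the corresponding incoming one:
    sign product of the others times max(min |others| - beta, 0).
    In neural offset min-sum, beta would be a learned parameter.
    """
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))
        mag = max(np.min(np.abs(others)) - beta, 0.0)
        out[i] = sign * mag
    return out
```

In a full decoder this update would run for every check node at every iteration, with a separate learned offset per edge or per iteration.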

Deep Learning Methods for Improved Decoding of Linear Codes

2 code implementations • 21 Jun 2017 • Eliya Nachmani, Elad Marciano, Loren Lugosch, Warren J. Gross, David Burshtein, Yair Beery

Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.

Learning from the Syndrome

1 code implementation • 23 Oct 2018 • Loren Lugosch, Warren J. Gross

In this paper, we introduce the syndrome loss, an alternative loss function for neural error-correcting decoders based on a relaxation of the syndrome.

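One plausible way to relax the binary syndrome (s = Hx mod 2) into a differentiable penalty, shown here as an illustrative soft-parity sketch rather than the paper's exact loss, is to score each parity check with a product of tanh-squashed LLRs:

```python
import numpy as np

def soft_syndrome_loss(llrs, H):
    """Illustrative relaxed-syndrome penalty (assumption, not the
    paper's exact formulation).

    llrs: decoder output log-likelihood ratios, one per code bit.
    H: binary parity-check matrix (checks x bits).
    For each check, a soft parity is the product of tanh(L/2) over
    the participating bits; a satisfied check gives parity near +1,
    so (1 - parity) penalizes violated checks. Note the loss needs
    no knowledge of the transmitted codeword, only the syndrome.
    """
    t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
    loss = 0.0
    for row in H:
        idx = np.nonzero(row)[0]
        parity = np.prod(t[idx])
        loss += 1.0 - parity
    return loss / H.shape[0]
```

Confident, consistent LLRs yield a near-zero loss, while flipping the sign of one bit's LLR drives the penalty for its checks toward 2.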

DONUT: CTC-based Query-by-Example Keyword Spotting

1 code implementation • 26 Nov 2018 • Loren Lugosch, Samuel Myer, Vikrant Singh Tomar

Keyword spotting, or wakeword detection, is an essential feature for hands-free operation of modern voice-controlled devices.

Keyword Spotting

Speech Model Pre-training for End-to-End Spoken Language Understanding

1 code implementation • 7 Apr 2019 • Loren Lugosch, Mirco Ravanelli, Patrick Ignoto, Vikrant Singh Tomar, Yoshua Bengio

Whereas conventional spoken language understanding (SLU) systems map speech to text, and then text to intent, end-to-end SLU systems map speech directly to intent through a single trainable model.

Ranked #15 on Spoken Language Understanding on Fluent Speech Commands (using extra training data)

Spoken Language Understanding

Using Speech Synthesis to Train End-to-End Spoken Language Understanding Models

2 code implementations • 21 Oct 2019 • Loren Lugosch, Brett Meyer, Derek Nowrouzezahrai, Mirco Ravanelli

End-to-end models are an attractive new approach to spoken language understanding (SLU) in which the meaning of an utterance is inferred directly from the raw audio without employing the standard pipeline composed of a separately trained speech recognizer and natural language understanding module.

Data Augmentation • Natural Language Understanding +2

Surprisal-Triggered Conditional Computation with Neural Networks

1 code implementation • 2 Jun 2020 • Loren Lugosch, Derek Nowrouzezahrai, Brett H. Meyer

The surprisal of the input, measured as the negative log-likelihood of the current observation according to the autoregressive model, is used as a measure of input difficulty.

Speech Recognition
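The trigger described in the abstract can be sketched directly. In this minimal sketch, `p_obs` stands in for the autoregressive model's predicted probability of the current observation, and the threshold value is a hypothetical choice, not one from the paper:

```python
import math

def surprisal(p_obs):
    """Surprisal of an observation: its negative log-likelihood
    under the autoregressive model (here reduced to a probability)."""
    return -math.log(p_obs)

def should_run_big_model(p_obs, threshold=2.0):
    # Hypothetical conditional-computation trigger: spend the
    # expensive network only on inputs the cheap autoregressive
    # model finds surprising (i.e., difficult).
    return surprisal(p_obs) > threshold
```

A well-predicted frame (high `p_obs`) has low surprisal and skips the expensive path; a poorly predicted one exceeds the threshold and triggers the full computation.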

Timers and Such: A Practical Benchmark for Spoken Language Understanding with Numbers

2 code implementations • 4 Apr 2021 • Loren Lugosch, Piyush Papreja, Mirco Ravanelli, Abdelwahab Heba, Titouan Parcollet

This paper introduces Timers and Such, a new open source dataset of spoken English commands for common voice control use cases involving numbers.

Ranked #4 on Spoken Language Understanding on Timers and Such (using extra training data)

Spoken Language Understanding

Pseudo-Labeling for Massively Multilingual Speech Recognition

no code implementations • 30 Oct 2021 • Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert

Semi-supervised learning through pseudo-labeling has become a staple of state-of-the-art monolingual speech recognition systems.

Speech Recognition
