Search Results for author: Joris Pelemans

Found 10 papers, 1 paper with code

User-Initiated Repetition-Based Recovery in Multi-Utterance Dialogue Systems

no code implementations • 2 Aug 2021 • Hoang Long Nguyen, Vincent Renkens, Joris Pelemans, Srividya Pranavi Potharaju, Anil Kumar Nalamalapu, Murat Akbacak

In this paper, we attempt to bridge this gap and present a system that allows a user to correct speech recognition errors in a virtual assistant by repeating misunderstood words.

Speech Recognition

On the long-term learning ability of LSTM LMs

no code implementations • 16 Jun 2021 • Wim Boes, Robbe Van Rompaey, Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq

We inspect the long-term learning ability of Long Short-Term Memory language models (LSTM LMs) by evaluating a contextual extension based on the Continuous Bag-of-Words (CBOW) model for both sentence- and discourse-level LSTM LMs and by analyzing its performance.
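The snippet mentions a contextual extension based on the Continuous Bag-of-Words (CBOW) model. As a minimal sketch of the general idea (not the paper's implementation; the vocabulary and embedding size here are toy assumptions), a CBOW-style context vector can be formed by averaging the embeddings of the preceding words and fed to the LSTM LM as an extra input:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 16  # hypothetical embedding size, not taken from the paper

vocab = {"the": 0, "market": 1, "rose": 2, "sharply": 3}
emb = rng.normal(size=(len(vocab), EMB_DIM))

def cbow_context(history):
    """CBOW-style context: average the embeddings of the preceding words.

    The resulting fixed-size vector summarizes sentence- or discourse-level
    history and could be concatenated to the LSTM input at each step.
    """
    ids = [vocab[w] for w in history if w in vocab]
    if not ids:
        return np.zeros(EMB_DIM)
    return emb[ids].mean(axis=0)
```

Averaging (rather than, say, a second recurrent encoder) keeps the context representation order-insensitive and cheap, which is the defining property of CBOW.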

Sentence

Information-Weighted Neural Cache Language Models for ASR

no code implementations • 24 Sep 2018 • Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq

Neural cache language models (LMs) extend the idea of regular cache language models by making the cache probability dependent on the similarity between the current context and the context of the words in the cache.
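The mechanism described above, making the cache probability depend on context similarity, can be sketched as follows. This is a generic illustration under toy assumptions (dot-product similarity, a flatness parameter `theta`, linear interpolation weight `lam`), not the paper's information-weighted variant:

```python
import math

def cache_probs(cache, query_ctx, theta=1.0):
    """Distribute the cache probability mass over cached words, weighting
    each entry by the similarity (here: dot product) between its stored
    context vector and the current query context."""
    if not cache:
        return {}
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    scores = [math.exp(theta * dot(ctx, query_ctx)) for ctx, _ in cache]
    z = sum(scores)
    probs = {}
    for (_, word), s in zip(cache, scores):
        probs[word] = probs.get(word, 0.0) + s / z  # same word may recur
    return probs

def interpolate(p_base, p_cache, lam=0.2):
    """Linearly interpolate the base LM distribution with the cache
    distribution, as in standard cache LMs."""
    words = set(p_base) | set(p_cache)
    return {w: (1 - lam) * p_base.get(w, 0.0) + lam * p_cache.get(w, 0.0)
            for w in words}
```

Words whose cached context resembles the current context thus receive a larger share of the cache mass than in a regular (count-based) cache.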

Language Models of Spoken Dutch

no code implementations • 12 Sep 2017 • Lyan Verwimp, Joris Pelemans, Marieke Lycke, Hugo Van hamme, Patrick Wambacq

One model is trained on all available data (46M word tokens), but we also train models on specific types of TV shows or on individual domains/topics.

Language Modelling • Speech Recognition +1

Character-Word LSTM Language Models

no code implementations • EACL 2017 • Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq

We present a Character-Word Long Short-Term Memory Language Model which both reduces the perplexity with respect to a baseline word-level language model and reduces the number of parameters of the model.
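One common way to combine character and word information, sketched here purely as an illustration (the vocabularies, dimensions, and first-N-characters scheme are assumptions, not the paper's architecture), is to concatenate a word embedding with embeddings of the word's characters before feeding the result to the LSTM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the actual dimensions are not given in the snippet.
WORD_DIM, CHAR_DIM, N_CHARS = 8, 3, 4  # embed the first N_CHARS characters

word_vocab = {"the": 0, "cat": 1, "<unk>": 2}
char_vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
PAD = len(char_vocab)  # padding index for short words

word_emb = rng.normal(size=(len(word_vocab), WORD_DIM))
char_emb = rng.normal(size=(len(char_vocab) + 1, CHAR_DIM))  # +1 for PAD

def word_input(word):
    """Concatenate the word embedding with embeddings of the word's first
    N_CHARS characters (padded), yielding the per-step LSTM input vector."""
    w = word_emb[word_vocab.get(word, word_vocab["<unk>"])]
    chars = [char_vocab.get(c, PAD) for c in word[:N_CHARS]]
    chars += [PAD] * (N_CHARS - len(chars))  # pad short words
    c = np.concatenate([char_emb[i] for i in chars])
    return np.concatenate([w, c])  # size WORD_DIM + N_CHARS * CHAR_DIM
```

Because the character embedding tables are shared across the vocabulary, part of the input capacity is reused for every word, which is one route to the parameter reduction the abstract mentions.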

Language Modelling

SCALE: A Scalable Language Engineering Toolkit

1 code implementation • LREC 2016 • Joris Pelemans, Lyan Verwimp, Kris Demuynck, Hugo Van hamme, Patrick Wambacq

In this paper we present SCALE, a new Python toolkit that contains two extensions to n-gram language models.

Language Modelling

Sparse Non-negative Matrix Language Modeling

no code implementations • TACL 2016 • Joris Pelemans, Noam Shazeer, Ciprian Chelba

We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus.

Automatic Speech Recognition (ASR) • Language Modelling +1

Skip-gram Language Modeling Using Sparse Non-negative Matrix Probability Estimation

no code implementations • 3 Dec 2014 • Noam Shazeer, Joris Pelemans, Ciprian Chelba

We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation.

Language Modelling

Speech Recognition Web Services for Dutch

no code implementations • LREC 2014 • Joris Pelemans, Kris Demuynck, Hugo Van hamme, Patrick Wambacq

In this paper we present 3 applications in the domain of Automatic Speech Recognition for Dutch, all of which are developed using our in-house speech recognition toolkit SPRAAK.

Automatic Speech Recognition (ASR) +1
