no code implementations • 2 Aug 2021 • Hoang Long Nguyen, Vincent Renkens, Joris Pelemans, Srividya Pranavi Potharaju, Anil Kumar Nalamalapu, Murat Akbacak
In this paper, we attempt to bridge this gap and present a system that allows a user to correct speech recognition errors in a virtual assistant by repeating misunderstood words.
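A highly simplified sketch of the repetition-based correction idea: phonetically or lexically align the re-recognized repeated words against the original hypothesis and substitute the closest-matching span. The matching metric and function below are illustrative assumptions, not the paper's actual system.

```python
from difflib import SequenceMatcher

def correct_by_repetition(hypothesis_words, repeated_words):
    """Replace the hypothesis span that best matches the repeated words.

    Toy example only: a real system would align on phonetic similarity and
    recognition confidence rather than surface string similarity.
    """
    best = (0.0, 0, 0)
    n = len(repeated_words)
    for i in range(len(hypothesis_words) - n + 1):
        span = " ".join(hypothesis_words[i:i + n])
        sim = SequenceMatcher(None, span, " ".join(repeated_words)).ratio()
        if sim > best[0]:
            best = (sim, i, i + n)
    _, start, end = best
    return hypothesis_words[:start] + repeated_words + hypothesis_words[end:]
```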
no code implementations • 16 Jun 2021 • Wim Boes, Robbe Van Rompaey, Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq
We inspect the long-term learning ability of Long Short-Term Memory language models (LSTM LMs) by evaluating a contextual extension based on the Continuous Bag-of-Words (CBOW) model for both sentence- and discourse-level LSTM LMs and by analyzing its performance.
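A minimal sketch (PyTorch) of the kind of CBOW-style contextual extension described above: a bag-of-words vector summarizing the preceding sentence- or discourse-level context is concatenated to every word embedding fed to the LSTM LM. All class names, dimensions, and interfaces are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ContextualLSTMLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, ctx_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.ctx_embed = nn.Embedding(vocab_size, ctx_dim)   # CBOW embeddings
        self.lstm = nn.LSTM(emb_dim + ctx_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, context_tokens):
        # tokens:         (batch, seq_len)  words of the current sentence
        # context_tokens: (batch, ctx_len)  words of the preceding context
        ctx = self.ctx_embed(context_tokens).mean(dim=1)       # CBOW average
        ctx = ctx.unsqueeze(1).expand(-1, tokens.size(1), -1)  # repeat per step
        x = torch.cat([self.embed(tokens), ctx], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                                     # next-word logits
```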
no code implementations • 24 Sep 2018 • Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq
Neural cache language models (LMs) extend the idea of regular cache language models by making the cache probability dependent on the similarity between the current context and the context of the words in the cache.
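A minimal NumPy sketch of the neural cache mechanism being extended here (in the style of Grave et al.): the cache stores (hidden state, word) pairs, the cache probability of a word is proportional to the summed similarity between the current hidden state and the cached states of that word, and the result is interpolated with the base LM distribution. The function signature and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def neural_cache_probs(h_t, cache_states, cache_words, p_model, vocab_size,
                       theta=0.3, lam=0.1):
    """h_t:          current hidden state, shape (d,)
    cache_states: hidden states of cached words, shape (n, d)
    cache_words:  word ids of cached words, shape (n,)
    p_model:      base LM distribution over the vocabulary, shape (vocab_size,)
    """
    if len(cache_words) == 0:
        return p_model
    # Similarity between the current context and each cached context.
    scores = np.exp(theta * cache_states @ h_t)          # (n,)
    p_cache = np.zeros(vocab_size)
    np.add.at(p_cache, cache_words, scores)              # aggregate per word
    p_cache /= p_cache.sum()
    # Linear interpolation of cache and base model distributions.
    return (1.0 - lam) * p_model + lam * p_cache
```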
no code implementations • 12 Sep 2017 • Lyan Verwimp, Joris Pelemans, Marieke Lycke, Hugo Van hamme, Patrick Wambacq
One model is trained on all available data (46M word tokens), but we also train models restricted to a specific type of TV show or to a specific domain/topic.
no code implementations • EACL 2017 • Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq
We present a Character-Word Long Short-Term Memory Language Model which both reduces the perplexity with respect to a baseline word-level language model and reduces the number of parameters of the model.
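A minimal PyTorch sketch of a character-word LSTM LM in the spirit of the description above: the input at each step is a (smaller) word embedding concatenated with embeddings of a fixed number of the word's characters, so fewer parameters are spent on the word embedding matrix. Dimensions, the number of characters per word, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharWordLSTMLM(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, word_dim=175,
                 char_dim=25, n_chars=3, hidden_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, word_dim)
        self.char_embed = nn.Embedding(char_vocab_size, char_dim)
        self.lstm = nn.LSTM(word_dim + n_chars * char_dim, hidden_dim,
                            batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, chars):
        # words: (batch, seq_len)            word ids
        # chars: (batch, seq_len, n_chars)   ids of the first n_chars characters
        w = self.word_embed(words)
        c = self.char_embed(chars).flatten(start_dim=2)  # concat char embeddings
        h, _ = self.lstm(torch.cat([w, c], dim=-1))
        return self.out(h)                               # next-word logits
```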
1 code implementation • LREC 2016 • Joris Pelemans, Lyan Verwimp, Kris Demuynck, Hugo Van hamme, Patrick Wambacq
In this paper we present SCALE, a new Python toolkit that contains two extensions to n-gram language models.
no code implementations • TACL 2016 • Joris Pelemans, Noam Shazeer, Ciprian Chelba
We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus.
no code implementations • WS 2015 • Vincent Vandeghinste, Tom Vanallemeersch, Frank Van Eynde, Geert Heyman, Sien Moens, Joris Pelemans, Patrick Wambacq, Iulianna Van der Lek-Ciudin, Arda Tezcan, Lieve Macken, Véronique Hoste, Eva Geurts, Mieke Haesen
no code implementations • 3 Dec 2014 • Noam Shazeer, Joris Pelemans, Ciprian Chelba
We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation.
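A minimal sketch of how an SNM-style model scores a word: the unnormalized score comes from summing non-negative entries of a sparse feature-by-word matrix over the (skip-)n-gram features extracted from the history, and probabilities are obtained by normalizing over the vocabulary. The representation of the matrix as a Python dict and the function interface are assumptions for illustration; feature extraction and the estimation of the matrix itself are not shown.

```python
def snm_prob(history_features, word, M, vocab):
    """M: dict mapping (feature, word) -> non-negative weight.
    history_features: iterable of feature ids extracted from the history.
    """
    def score(w):
        return sum(M.get((f, w), 0.0) for f in history_features)

    denom = sum(score(w2) for w2 in vocab)
    return score(word) / denom if denom > 0 else 1.0 / len(vocab)
```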
Ranked #23 on Language Modelling on the One Billion Word benchmark
no code implementations • LREC 2014 • Joris Pelemans, Kris Demuynck, Hugo Van hamme, Patrick Wambacq
In this paper we present 3 applications in the domain of Automatic Speech Recognition for Dutch, all of which are developed using our in-house speech recognition toolkit SPRAAK.