Search Results for author: Lyan Verwimp

Found 12 papers, 2 papers with code

SCALE: A Scalable Language Engineering Toolkit

1 code implementation • LREC 2016 • Joris Pelemans, Lyan Verwimp, Kris Demuynck, Hugo Van hamme, Patrick Wambacq

In this paper we present SCALE, a new Python toolkit that contains two extensions to n-gram language models.

Language Modelling

Character-Word LSTM Language Models

no code implementations • EACL 2017 • Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq

We present a Character-Word Long Short-Term Memory language model that reduces both the perplexity relative to a baseline word-level language model and the number of model parameters.

Language Modelling
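The core mechanism is to feed the LSTM a concatenation of each word's embedding and the embeddings of (a fixed number of) its characters. A minimal sketch of such an input layer, with purely illustrative names and sizes rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CharWordLSTM(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, word_dim=512,
                 char_dim=25, chars_per_word=8, hidden_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        input_dim = word_dim + chars_per_word * char_dim
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, chars_per_word)
        w = self.word_emb(word_ids)                       # (B, T, word_dim)
        c = self.char_emb(char_ids).flatten(start_dim=2)  # (B, T, chars_per_word * char_dim)
        h, _ = self.lstm(torch.cat([w, c], dim=-1))       # word and character views concatenated
        return self.out(h)                                # next-word logits
```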

Language Models of Spoken Dutch

no code implementations • 12 Sep 2017 • Lyan Verwimp, Joris Pelemans, Marieke Lycke, Hugo Van hamme, Patrick Wambacq

One model is trained on all available data (46M word tokens), but we also train models on specific types of TV shows or on specific domains/topics.

Language Modelling • Speech Recognition +1

State Gradients for RNN Memory Analysis

no code implementations • WS 2018 • Lyan Verwimp, Hugo Van hamme, Vincent Renkens, Patrick Wambacq

We present a framework for analyzing what the state in RNNs remembers from its input embeddings.
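One way to read "state gradients" concretely: backpropagate from the hidden state to the input embeddings and inspect how sensitive the state is to each time step. The sketch below is only an illustrative proxy (gradient norms of a summed final state), not the paper's full framework, which analyses the complete gradient matrices:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(1000, 64)               # toy vocabulary and sizes
rnn = nn.LSTM(64, 128, batch_first=True)

tokens = torch.randint(0, 1000, (1, 10))   # one sequence of 10 tokens
x = emb(tokens)
x.retain_grad()                            # keep gradients on the embedding outputs
_, (h_n, _) = rnn(x)

# Gradient of the (summed) final hidden state w.r.t. every input embedding:
# larger norms indicate time steps whose embeddings the state is more
# sensitive to, i.e. "remembers" more strongly.
h_n.sum().backward()
print(x.grad.norm(dim=-1).squeeze(0))      # one sensitivity value per time step
```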

Information-Weighted Neural Cache Language Models for ASR

no code implementations • 24 Sep 2018 • Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq

Neural cache language models (LMs) extend the idea of regular cache language models by making the cache probability dependent on the similarity between the current context and the context of the words in the cache.
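As a rough illustration of that idea (not the paper's information-weighted variant, which further reweights the cached entries): the cache probability of a word is proportional to the summed, exponentiated similarity between the current hidden state and the hidden states at which that word previously occurred. A small NumPy sketch with illustrative parameter names:

```python
import numpy as np

def cache_distribution(h_t, cache_states, cache_words, vocab_size, theta=0.3):
    """p_cache(w) is proportional to the sum, over cached positions i where
    word w occurred, of exp(theta * <h_t, h_i>); theta is illustrative."""
    scores = np.exp(theta * cache_states @ h_t)   # one similarity score per cached position
    probs = np.zeros(vocab_size)
    for word, score in zip(cache_words, scores):
        probs[word] += score
    return probs / probs.sum()

# The cache distribution is then interpolated with the regular LM, e.g.
#   p = (1 - lam) * p_lm + lam * cache_distribution(h_t, H, words, V)
```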

Reverse Transfer Learning: Can Word Embeddings Trained for Different NLP Tasks Improve Neural Language Models?

no code implementations • 9 Sep 2019 • Lyan Verwimp, Jerome R. Bellegarda

Natural language processing (NLP) tasks tend to suffer from a paucity of suitably annotated training data, hence the recent success of transfer learning across a wide variety of them.

Domain Classification • Language Modelling +2

Error-driven Pruning of Language Models for Virtual Assistants

no code implementations • 14 Feb 2021 • Sashank Gondala, Lyan Verwimp, Ernest Pusateri, Manos Tsagkias, Christophe Van Gysel

We customize entropy pruning by allowing for a keep list of infrequent n-grams that require a more relaxed pruning threshold, and propose three methods to construct the keep list.
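A schematic of the keep-list idea (illustrative only; in real entropy pruning the scores come from the relative-entropy increase caused by dropping each n-gram, and the threshold names here are placeholders):

```python
def prune_with_keep_list(ngram_scores, keep_list, threshold, relaxed_threshold):
    """Drop n-grams whose pruning score falls below the threshold, but judge
    n-grams on the keep list against a more permissive (smaller) threshold so
    that infrequent-but-important entries survive pruning."""
    pruned_model = {}
    for ngram, score in ngram_scores.items():
        limit = relaxed_threshold if ngram in keep_list else threshold
        if score >= limit:
            pruned_model[ngram] = score
    return pruned_model
```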

On the long-term learning ability of LSTM LMs

no code implementations • 16 Jun 2021 • Wim Boes, Robbe Van Rompaey, Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq

We inspect the long-term learning ability of Long Short-Term Memory language models (LSTM LMs) by evaluating a contextual extension based on the Continuous Bag-of-Words (CBOW) model for both sentence- and discourse-level LSTM LMs and by analyzing its performance.

Sentence
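One concrete way to realise such a CBOW-based contextual extension is to average the embeddings of the preceding context and concatenate that vector to every word embedding before the LSTM. The sketch below uses illustrative names and sizes, not the paper's exact setup:

```python
import torch
import torch.nn as nn

class ContextualLSTMLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim * 2, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, context_ids):
        # word_ids: (B, T) current sentence; context_ids: (B, N) preceding words
        x = self.emb(word_ids)                              # (B, T, E)
        cbow = self.emb(context_ids).mean(dim=1)            # (B, E) bag-of-words context vector
        cbow = cbow.unsqueeze(1).expand(-1, x.size(1), -1)  # broadcast to every time step
        h, _ = self.lstm(torch.cat([x, cbow], dim=-1))
        return self.out(h)                                  # next-word logits
```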

Towards a World-English Language Model for On-Device Virtual Assistants

no code implementations • 27 Mar 2024 • Rricha Jalota, Lyan Verwimp, Markus Nussbaum-Thom, Amr Mousa, Arturo Argueta, Youssef Oualil

Based on this insight and leveraging the design of our production models, we introduce a new architecture for World English NNLM that meets the accuracy, latency, and memory constraints of our single-dialect models.

Language Modelling
