Search Results for author: Wietse de Vries

Found 8 papers, 7 papers with code

Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages

1 code implementation • ACL 2022 • Wietse de Vries, Martijn Wieling, Malvina Nissim

Cross-lingual transfer learning with large multilingual pre-trained models can be an effective approach for low-resource languages with no labeled training data.

Part-Of-Speech Tagging • POS • +3
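The idea behind this entry is zero-shot cross-lingual transfer: fine-tune a multilingual encoder on POS-labeled data in a source language, then apply it directly to a target language with no labeled data. Below is a minimal sketch of that inference step; the checkpoint name is a placeholder, not the authors' released model, and any multilingual model fine-tuned for POS tagging on a source language could be substituted.

```python
# Minimal sketch of zero-shot cross-lingual POS tagging with a multilingual
# encoder. The checkpoint name is a PLACEHOLDER: swap in an actual multilingual
# model fine-tuned on source-language POS data (e.g. English UD) before running.
from transformers import pipeline

MODEL_NAME = "path/to/xlmr-finetuned-en-pos"  # placeholder, not a real model ID

tagger = pipeline("token-classification", model=MODEL_NAME,
                  aggregation_strategy="simple")

# The model has only seen labeled source-language data, but shared multilingual
# representations let it tag a Dutch sentence without any Dutch supervision.
sentence = "De kat zit op de mat."
for token in tagger(sentence):
    print(token["word"], token["entity_group"], round(token["score"], 3))
```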

DUMB: A Benchmark for Smart Evaluation of Dutch Models

2 code implementations • 22 May 2023 • Wietse de Vries, Martijn Wieling, Malvina Nissim

The benchmark includes a diverse set of datasets for low-, medium- and high-resource tasks.

XLM-R

As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages

1 code implementation • Findings (ACL) 2021 • Wietse de Vries, Malvina Nissim

Specifically, we describe the adaptation of English GPT-2 to Italian and Dutch by retraining lexical embeddings without tuning the Transformer layers.
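A rough sketch of the recycling idea described above: keep the English GPT-2 Transformer layers frozen and make only the lexical embeddings trainable for the new language. The paper's full recipe also initialises the new embeddings from overlapping English tokens and uses a language-specific tokenizer; the tokenizer path here is a placeholder and that initialisation step is omitted.

```python
# Simplified sketch: freeze GPT-2's Transformer layers and retrain only the
# (tied) token embeddings for a new language. Simplification of the paper's
# procedure; the embedding re-initialisation from overlapping tokens is omitted.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Placeholder: a BPE tokenizer trained on the target language (e.g. Dutch).
new_tokenizer = GPT2TokenizerFast.from_pretrained("path/to/dutch-gpt2-tokenizer")

# Resize the embedding matrix to the new vocabulary. Input and output
# embeddings are tied in GPT-2, so both are replaced at once.
model.resize_token_embeddings(len(new_tokenizer))

# Freeze everything...
for param in model.parameters():
    param.requires_grad = False

# ...except the token embeddings (the LM head shares these weights).
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters (embeddings only): {trainable:,}")
```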

Neural Representations for Modeling Variation in Speech

1 code implementation • 25 Nov 2020 • Martijn Bartelds, Wietse de Vries, Faraz Sanal, Caitlin Richter, Mark Liberman, Martijn Wieling

We show that speech representations extracted from a specific type of neural model (i.e. Transformers) lead to a better match with human perception than two earlier approaches based on phonetic transcriptions and MFCC-based acoustic features.
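The following sketch illustrates the general pipeline suggested by this entry: extract frame-level Transformer representations (here wav2vec 2.0 hidden states) from two recordings and compare them with a dynamic-time-warping distance. The specific model, layer, and distance measure used in the paper may differ; the audio paths are placeholders.

```python
# Sketch: compare two pronunciations via Transformer speech representations
# (wav2vec 2.0 hidden states) and a plain DTW distance. Illustrative only;
# the paper's exact model, layer, and distance may differ.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

def hidden_states(path, layer=8):
    """Return frame-level hidden states from one Transformer layer."""
    waveform, sr = torchaudio.load(path)
    waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)
    inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].squeeze(0)  # (frames, dim)

def dtw_distance(a, b):
    """Dynamic-time-warping distance over frame-wise cosine distances."""
    cost = 1 - torch.nn.functional.normalize(a, dim=1) @ \
        torch.nn.functional.normalize(b, dim=1).T
    n, m = cost.shape
    acc = torch.full((n + 1, m + 1), float("inf"))
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + torch.min(torch.stack(
                [acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]]))
    return (acc[n, m] / (n + m)).item()

# Lower distance = more similar pronunciations (file paths are placeholders).
print(dtw_distance(hidden_states("speaker_a.wav"), hidden_states("speaker_b.wav")))
```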

BERTje: A Dutch BERT Model

2 code implementations • 19 Dec 2019 • Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, Malvina Nissim

The transformer-based pre-trained language model BERT has helped to improve state-of-the-art performance on many natural language processing (NLP) tasks.

Language Modelling • Named Entity Recognition • +5
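A minimal usage sketch for this entry: loading BERTje for Dutch masked-token prediction. To the best of my knowledge the model is published on the Hugging Face Hub as "GroNLP/bert-base-dutch-cased"; check the paper's repository if the identifier has changed.

```python
# Minimal sketch: load BERTje and predict a masked Dutch token.
# Model ID assumed to be "GroNLP/bert-base-dutch-cased"; verify against the repo.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased")

for prediction in fill_mask("Amsterdam is de [MASK] van Nederland."):
    print(prediction["token_str"], round(prediction["score"], 3))
```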
