no code implementations • 15 Nov 2022 • Pieter Delobelle, Thomas Winters, Bettina Berendt
To evaluate whether our new model is a plug-in replacement for RobBERT, we introduce two additional criteria based on concept drift of existing tokens and alignment for novel tokens. We found that for certain language tasks, this update results in a significant performance increase.
no code implementations • 28 Apr 2022 • Pieter Delobelle, Thomas Winters, Bettina Berendt
We found that models trained on shuffled versus non-shuffled datasets perform similarly on most tasks, and that randomly merging consecutive sentences in a corpus yields models that train faster and perform better on tasks with long sequences.
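The merging idea can be illustrated with a minimal sketch. The greedy left-to-right pass and the `merge_prob` parameter are illustrative assumptions, not the paper's exact procedure:

```python
import random

def merge_consecutive(sentences, merge_prob=0.5, seed=0):
    """Randomly merge consecutive sentences into longer training sequences.

    merge_prob controls how often a sentence is appended to the previous
    sequence instead of starting a new one (an assumed knob, for illustration).
    """
    rng = random.Random(seed)
    merged = []
    for sentence in sentences:
        if merged and rng.random() < merge_prob:
            # Append to the previous sequence, producing a longer example.
            merged[-1] = merged[-1] + " " + sentence
        else:
            merged.append(sentence)
    return merged
```

With `merge_prob=0.0` the corpus is unchanged; with `merge_prob=1.0` every document collapses into a single long sequence, so intermediate values trade off sequence length against example count.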
1 code implementation • 21 Sep 2021 • Gillis Hermans, Thomas Winters, Luc De Raedt
Designers in various industries increasingly rely on procedural generation to create content automatically.
1 code implementation • 23 Jun 2021 • Thomas Winters, Giuseppe Marra, Robin Manhaeve, Luc De Raedt
Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard.
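The possible-world semantics can be sketched by brute-force enumeration over independent probabilistic facts. This is only a demonstration of the semantics, not an inference algorithm one would use in practice (it is exponential in the number of facts, which is why efficient inference is the hard part):

```python
from itertools import product

def possible_worlds(prob_facts):
    """Enumerate the distribution over possible worlds induced by
    independent probabilistic facts.

    prob_facts: dict mapping a fact name to its probability of being true.
    Returns a dict mapping each world (frozenset of true facts) to its mass.
    """
    names = list(prob_facts)
    worlds = {}
    for assignment in product([True, False], repeat=len(names)):
        p = 1.0
        true_facts = set()
        for name, value in zip(names, assignment):
            p *= prob_facts[name] if value else 1.0 - prob_facts[name]
            if value:
                true_facts.add(name)
        worlds[frozenset(true_facts)] = p
    return worlds

def query(prob_facts, holds):
    """Probability of a query: total mass of the worlds where it holds."""
    return sum(p for world, p in possible_worlds(prob_facts).items()
               if holds(world))
```

For example, with facts `rain` (0.3) and `sprinkler` (0.5), the query "rain or sprinkler" sums the mass of three of the four worlds, giving 1 - 0.7 * 0.5 = 0.65.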
no code implementations • 26 Oct 2020 • Thomas Winters, Pieter Delobelle
Detecting whether a text is humorous is computationally challenging, as it usually requires linguistic and common-sense insights.
1 code implementation • 9 Sep 2020 • Thomas Winters, Luc De Raedt
In this paper, we introduce a novel grammar induction algorithm for learning interpretable grammars for generative purposes, called Gitta.
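Using a grammar generatively means recursively expanding its rules into strings. The sketch below shows this with a toy context-free grammar; the grammar format (nonterminal mapped to a list of productions) is an illustrative assumption, not Gitta's actual representation:

```python
import random

def generate(grammar, symbol="S", rng=None):
    """Recursively expand a context-free grammar to generate a string.

    grammar: dict mapping a nonterminal to a list of productions,
    where each production is a list of symbols (assumed format).
    """
    rng = rng or random.Random(0)
    if symbol not in grammar:
        # Symbols without rules are terminals and are emitted as-is.
        return symbol
    production = rng.choice(grammar[symbol])
    return " ".join(generate(grammar, s, rng) for s in production)

# Hypothetical toy grammar for illustration.
toy_grammar = {
    "S": [["the", "ANIMAL", "VERB"]],
    "ANIMAL": [["cat"], ["dog"]],
    "VERB": [["sleeps"], ["barks"]],
}
```

Grammar induction runs this process in reverse: from example strings such as "the cat sleeps" and "the dog barks", it recovers compact, interpretable rules like those above.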
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Pieter Delobelle, Thomas Winters, Bettina Berendt
Training a Dutch BERT model thus has a lot of potential for a wide range of Dutch NLP tasks.
Ranked #1 on Sentiment Analysis on DBRD
1 code implementation • 19 Sep 2019 • Thomas Winters
Automatically imitating input text is a common task in natural language generation, often used to create humorous results.
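One common baseline for imitating input text is a word-level Markov model, sketched below. This is a generic illustration of the task, not the specific method of the paper:

```python
import random
from collections import defaultdict

def train_markov(text, order=1):
    """Map each context of `order` words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def imitate(model, length=10, seed=0):
    """Sample new text that locally imitates the input's word sequences."""
    rng = random.Random(seed)
    context = rng.choice(list(model))
    output = list(context)
    while len(output) < length:
        followers = model.get(tuple(output[-len(context):]))
        if not followers:
            break  # dead end: the context never continues in the input
        output.append(rng.choice(followers))
    return " ".join(output)
```

Higher `order` values copy the input more faithfully; lower values recombine it more freely, which is often where the humorous results come from.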