no code implementations • NAACL 2022 • Pieter Delobelle, Ewoenam Tokpo, Toon Calders, Bettina Berendt
We survey the literature on fairness metrics for pre-trained language models and experimentally evaluate compatibility, including both biases in language models and in their downstream tasks.
no code implementations • 21 Mar 2024 • Marco Favier, Toon Calders, Sam Pinxteren, Jonathan Meyer
Recently, Wick et al. showed, with experiments on synthetic data, that there exist situations in which bias mitigation techniques lead to more accurate models when measured on unbiased data.
no code implementations • 24 Jan 2024 • Sofie Goethals, Toon Calders, David Martens
Artificial Intelligence (AI) finds widespread applications across various domains, sparking concerns about fairness in its deployment.
no code implementations • 6 Nov 2023 • Ewoenam Kwaku Tokpo, Toon Calders
Counterfactual Data Augmentation (CDA) has been one of the preferred techniques for mitigating gender bias in natural language models.
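As a rough illustration of the general idea behind CDA (not the exact pipeline from this paper), a toy word-level gender swap over a corpus could look like the following sketch; the swap list and tokenization are simplified assumptions, and real CDA handles casing, names, and grammatical agreement.

```python
# Minimal sketch of Counterfactual Data Augmentation (CDA): augment a corpus
# by adding a gender-swapped counterfactual of each sentence, so the model
# sees both variants. The swap list below is a toy assumption.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    # Word-level swap; real pipelines use larger lexicons and better tokenizers.
    return " ".join(SWAPS.get(tok.lower(), tok) for tok in sentence.split())

def augment(corpus: list[str]) -> list[str]:
    # Keep each original sentence and add its counterfactual variant.
    return [s for sent in corpus for s in (sent, counterfactual(sent))]

print(augment(["he is a doctor", "she plays football"]))
```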
1 code implementation • 30 Jan 2023 • Ewoenam Tokpo, Pieter Delobelle, Bettina Berendt, Toon Calders
Considering that the end use of these language models is for downstream tasks like text classification, it is important to understand how these intrinsic bias mitigation strategies actually translate to fairness in downstream tasks, and to what extent they do.
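For context, downstream (extrinsic) fairness is typically quantified on the classifier's predictions rather than on the language model itself. A small sketch of one such metric, assuming binary predictions and a binary protected attribute; the specific metrics evaluated in the paper may differ.

```python
import numpy as np

def demographic_parity_difference(y_pred, protected):
    """Absolute difference in positive-prediction rates between the two groups
    defined by a binary protected attribute. One common extrinsic fairness
    metric for downstream classifiers; shown here only as an illustration."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_a = y_pred[protected == 0].mean()
    rate_b = y_pred[protected == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: predictions for documents associated with group 0 vs group 1.
print(demographic_parity_difference([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1]))
```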
no code implementations • NAACL (ACL) 2022 • Ewoenam Kwaku Tokpo, Toon Calders
Our style transfer model addresses limitations of many existing style transfer techniques, such as loss of content information.
1 code implementation • 14 Dec 2021 • Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, Bettina Berendt
We survey the existing literature on fairness metrics for pretrained language models and experimentally evaluate compatibility, including both biases in language models and in their downstream tasks.
no code implementations • LREC 2020 • Hafiz Hassaan Saeed, Toon Calders, Faisal Kamiran
In this paper, we describe our submission for the OSACT4 2020 shared tasks on offensive language and hate speech detection in the Arabic language.
no code implementations • 18 Feb 2019 • Nikolaj Tatti, Fabian Moerchen, Toon Calders
We do this by measuring the robustness of a property of an itemset, such as closedness or non-derivability.
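To make the notion concrete, an itemset is closed if no proper superset has the same support, and its robustness can be thought of as how often that property survives when transactions are randomly dropped. The sketch below estimates this by Monte Carlo sampling; it is only an illustration under assumed parameters, not the paper's actual robustness measure or its computation.

```python
import random

def support(itemset, transactions):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def is_closed(itemset, transactions, all_items):
    """Closed: no proper superset has the same support."""
    s = support(itemset, transactions)
    return all(support(itemset | {i}, transactions) < s
               for i in all_items - itemset)

def robustness_of_closedness(itemset, transactions, all_items,
                             keep_prob=0.8, samples=1000, seed=0):
    """Estimate how often the itemset stays closed when each transaction is
    independently kept with probability keep_prob (illustrative sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        sub = [t for t in transactions if rng.random() < keep_prob]
        if sub and is_closed(itemset, sub, all_items):
            hits += 1
    return hits / samples

transactions = [frozenset(t) for t in
                [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}]]
print(robustness_of_closedness(frozenset({"a", "b"}), transactions, {"a", "b", "c"}))
```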
no code implementations • 15 Sep 2018 • Stephen Pauwels, Toon Calders
These scores can be used to detect outlying cases and concept drift.
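As a hedged illustration of how per-case scores might be turned into such signals (the threshold, window size, and drift criterion below are assumptions, not the procedure from the paper): flag cases with unusually low scores as outliers, and flag drift when recent scores drop relative to earlier ones.

```python
import numpy as np

def flag_outliers(scores, threshold_percentile=5):
    """Mark cases whose score falls below a low percentile as outliers,
    assuming lower scores mean 'less likely under the model'."""
    scores = np.asarray(scores, dtype=float)
    return scores < np.percentile(scores, threshold_percentile)

def detect_drift(scores, window=50, drop=0.2):
    """Signal concept drift when the mean score of the most recent window
    drops by more than `drop` (relative) versus the preceding data."""
    scores = np.asarray(scores, dtype=float)
    if len(scores) <= window:
        return False
    return scores[-window:].mean() < (1 - drop) * scores[:-window].mean()

scores = [0.9, 0.8, 0.85, 0.1, 0.88, 0.82]
print(flag_outliers(scores))        # the 0.1 case is flagged
print(detect_drift(scores, window=3))
```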
no code implementations • 18 May 2018 • Stephen Pauwels, Toon Calders
Checking various log files from different processes can be a tedious task, as these logs contain many events, each with a (possibly large) number of attributes.