Search Results for author: Toon Calders

Found 12 papers, 2 papers with code

Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models

no code implementations NAACL 2022 Pieter Delobelle, Ewoenam Tokpo, Toon Calders, Bettina Berendt

We survey the literature on fairness metrics for pre-trained language models and experimentally evaluate compatibility, including both biases in language models and in their downstream tasks.

Attribute Fairness

How to be fair? A study of label and selection bias

no code implementations 21 Mar 2024 Marco Favier, Toon Calders, Sam Pinxteren, Jonathan Meyer

Recently, Wick et al. showed, with experiments on synthetic data, that there exist situations in which bias mitigation techniques lead to more accurate models when measured on unbiased data.

Fairness Selection bias

Beyond Accuracy-Fairness: Stop evaluating bias mitigation methods solely on between-group metrics

no code implementations 24 Jan 2024 Sofie Goethals, Toon Calders, David Martens

Artificial Intelligence (AI) finds widespread applications across various domains, sparking concerns about fairness in its deployment.

Fairness

Model-based Counterfactual Generator for Gender Bias Mitigation

no code implementations 6 Nov 2023 Ewoenam Kwaku Tokpo, Toon Calders

Counterfactual Data Augmentation (CDA) has been one of the preferred techniques for mitigating gender bias in natural language models.

Counterfactual Data Augmentation
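For readers unfamiliar with the baseline this paper improves on, a minimal sketch of Counterfactual Data Augmentation follows: the corpus is augmented with copies of each sentence in which gendered terms are swapped, so the model trains on both variants. The word-pair list and helper names are illustrative, not the paper's actual lexicon or method.

```python
# Minimal, illustrative CDA sketch (not the paper's implementation).
# Known limitation kept simple here: "her" is mapped only to "his",
# which is wrong for object-pronoun uses ("saw her" -> "saw him").
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "man": "woman", "woman": "man",
    "actor": "actress", "actress": "actor",
}

def counterfactual(sentence: str) -> str:
    """Return the sentence with gendered words swapped, preserving case."""
    out = []
    for token in sentence.split():
        core = token.strip(".,!?")
        swapped = GENDER_PAIRS.get(core.lower())
        if swapped is None:
            out.append(token)
        else:
            if core[0].isupper():
                swapped = swapped.capitalize()
            out.append(token.replace(core, swapped, 1))
    return " ".join(out)

def augment(corpus):
    """Naive CDA: keep each original sentence and add its counterfactual."""
    return [variant for s in corpus for variant in (s, counterfactual(s))]
```

For example, `augment(["He likes his dog."])` yields both the original sentence and "She likes her dog.". Model-based generators, as proposed above, aim to replace this brittle word-list substitution.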

How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification

1 code implementation 30 Jan 2023 Ewoenam Tokpo, Pieter Delobelle, Bettina Berendt, Toon Calders

Considering that the end use of these language models is for downstream tasks like text classification, it is important to understand how, and to what extent, these intrinsic bias mitigation strategies actually translate to fairness in downstream tasks.

Fairness text-classification +1

Text Style Transfer for Bias Mitigation using Masked Language Modeling

no code implementations NAACL (ACL) 2022 Ewoenam Kwaku Tokpo, Toon Calders

Our style transfer model addresses the limitations of many existing style transfer techniques, such as loss of content information.

Language Modelling Masked Language Modeling +2

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models

1 code implementation 14 Dec 2021 Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, Bettina Berendt

We survey the existing literature on fairness metrics for pretrained language models and experimentally evaluate compatibility, including both biases in language models and in their downstream tasks.

Attribute Fairness

Finding Robust Itemsets Under Subsampling

no code implementations 18 Feb 2019 Nikolaj Tatti, Fabian Moerchen, Toon Calders

We do this by measuring the robustness of a property of an itemset such as closedness or non-derivability.

Detecting and Explaining Drifts in Yearly Grant Applications

no code implementations 15 Sep 2018 Stephen Pauwels, Toon Calders

These scores can be used to detect outlying cases and concept drift.

Extending Dynamic Bayesian Networks for Anomaly Detection in Complex Logs

no code implementations 18 May 2018 Stephen Pauwels, Toon Calders

Checking various log files from different processes can be a tedious task as these logs contain lots of events, each with a (possibly large) number of attributes.

Anomaly Detection