Search Results for author: Tobias Lüken

Found 1 paper, 0 papers with code

Sustainable Modular Debiasing of Language Models

no code implementations · Findings of EMNLP 2021 · Anne Lauscher, Tobias Lüken, Goran Glavaš

Unfair stereotypical biases (e.g., gender, racial, or religious biases) encoded in modern pretrained language models (PLMs) have negative ethical implications for the widespread adoption of state-of-the-art language technology.

Fairness · Language Modelling
