GeBNLP (COLING) 2020 • Hannah Devinney, Jenny Björklund, Henrik Björklund
Gender bias has been identified in many models for Natural Language Processing, stemming from implicit biases in the text corpora used to train the models.
19 Apr 2023 • Andrea Aler Tubella, Dimitri Coelho Mollo, Adam Dahlgren Lindström, Hannah Devinney, Virginia Dignum, Petter Ericson, Anna Jonsson, Timotheus Kampik, Tom Lenaerts, Julian Alfredo Mendez, Juan Carlos Nieves
Fairness is central to the ethical and responsible development and use of AI systems, and a large number of frameworks and formal notions of algorithmic fairness are available.
5 May 2022 • Hannah Devinney, Jenny Björklund, Henrik Björklund
Rising concern that Natural Language Processing (NLP) technologies contain and perpetuate social biases has led to a rich and rapidly growing area of research.