Search Results for author: Hannah Devinney

Found 3 papers, 0 papers with code

Semi-Supervised Topic Modeling for Gender Bias Discovery in English and Swedish

no code implementations • GeBNLP (COLING) 2020 • Hannah Devinney, Jenny Björklund, Henrik Björklund

Gender bias has been identified in many models for Natural Language Processing, stemming from implicit biases in the text corpora used to train the models.

Topic Models

ACROCPoLis: A Descriptive Framework for Making Sense of Fairness

no code implementations • 19 Apr 2023 • Andrea Aler Tubella, Dimitri Coelho Mollo, Adam Dahlgren Lindström, Hannah Devinney, Virginia Dignum, Petter Ericson, Anna Jonsson, Timotheus Kampik, Tom Lenaerts, Julian Alfredo Mendez, Juan Carlos Nieves

Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available.

Descriptive Fairness

Theories of "Gender" in NLP Bias Research

no code implementations • 5 May 2022 • Hannah Devinney, Jenny Björklund, Henrik Björklund

The rise of concern around Natural Language Processing (NLP) technologies containing and perpetuating social biases has led to a rich and rapidly growing area of research.
