Gender Bias Detection

8 papers with code • 0 benchmarks • 5 datasets


Most implemented papers

Measuring Gender Bias in Word Embeddings across Domains and Discovering New Gender Bias Word Categories

alfredomg/GeBNLP2019 WS 2019

We find that some domains are markedly more prone to gender bias than others, and that the categories of gender bias present vary across sets of word embeddings.
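The kind of measurement this line of work builds on can be sketched with a direct-bias score: project each word vector onto a gender direction (here `he` − `she`) and read off the cosine. The toy vectors below are invented for illustration; an actual study would load trained embeddings (e.g. word2vec or GloVe) per domain.

```python
import numpy as np

# Toy 3-d word vectors, invented for illustration only.
vecs = {
    "he":       np.array([ 0.9, 0.1, 0.0]),
    "she":      np.array([-0.9, 0.1, 0.0]),
    "engineer": np.array([ 0.5, 0.6, 0.2]),
    "nurse":    np.array([-0.5, 0.6, 0.2]),
}

def unit(v):
    return v / np.linalg.norm(v)

# Gender direction: difference of a definitional pair, normalized.
g = unit(vecs["he"] - vecs["she"])

def direct_bias(word):
    """Cosine of the word vector with the gender direction.
    Positive leans 'he', negative leans 'she'."""
    return float(np.dot(unit(vecs[word]), g))

for w in ("engineer", "nurse"):
    print(w, round(direct_bias(w), 3))
```

Comparing the distribution of such scores across word categories (occupations, adjectives, etc.) and across embedding sets is one simple way to make "more prone to gender bias" concrete.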

Matched sample selection with GANs for mitigating attribute confounding

csinva/matching-with-gans 24 Mar 2021

Measuring biases of vision systems with respect to protected attributes like gender and age is critical as these systems gain widespread use in society.

Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics

charan223/FairDeepLearning 8 Jun 2021

With the recent growth of attention to fairness among machine learning researchers and practitioners, there is still no common framework for analyzing and comparing the capabilities of proposed models in deep representation learning.
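As a minimal illustration of the kind of fairness metrics such a benchmark compares, the sketch below computes the demographic parity difference and the equal-opportunity (true-positive-rate) difference for a binary protected attribute. All arrays are invented toy data, not from the paper.

```python
import numpy as np

# Hypothetical predictions, labels, and a binary protected
# attribute (e.g. recorded gender) -- illustrative values only.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_pred, y_true, group):
    """Gap in true-positive rates between the two groups."""
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

print(demographic_parity_diff(y_pred, group))          # 0.5
print(equal_opportunity_diff(y_pred, y_true, group))   # 0.5
```

A zero value on either metric means the classifier treats the two groups identically by that criterion; benchmarks of bias-mitigation algorithms typically report several such gaps side by side.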

Quantifying Gender Biases Towards Politicians on Reddit

spaidataiga/redditpoliticalbias 22 Dec 2021

Rather than overt hostile or benevolent sexism, the nominal and lexical analyses suggest that interest in female politicians is less professional and less respectful than that expressed about male politicians.

Feature robustness and sex differences in medical imaging: a case study in MRI-based Alzheimer's disease detection

e-pet/adni-bias 4 Apr 2022

While logistic regression is fully robust to dataset composition, we find that CNN performance generally improves for both male and female subjects when more female subjects are included in the training dataset.

FairShap: A Data Re-weighting Approach for Algorithmic Fairness based on Shapley Values

AdrianArnaiz/fair-shap 3 Mar 2023

Algorithmic fairness is of utmost societal importance, yet large-scale machine learning models are increasingly trained on massive datasets that are frequently biased.
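FairShap itself values each training example with Shapley values; as a much simpler illustration of the underlying data re-weighting idea, the sketch below assigns plain inverse-group-frequency weights so that an under-represented group carries equal total weight in a weighted loss. The attribute array and the weighting scheme are illustrative, not the paper's method.

```python
import numpy as np

# Hypothetical binary protected attribute for a biased training
# set: group 1 is under-represented. Values are illustrative only.
group = np.array([0, 0, 0, 0, 0, 0, 1, 1])

def inverse_frequency_weights(group):
    """Per-example weights proportional to 1 / group frequency,
    normalized so the weights sum to the number of examples."""
    counts = np.bincount(group)
    w = 1.0 / counts[group]
    return w * len(group) / w.sum()

w = inverse_frequency_weights(group)
# Both groups now carry (approximately) equal total weight:
print(w[group == 0].sum(), w[group == 1].sum())
```

The resulting weights would be passed as `sample_weight` to a training routine; Shapley-based schemes such as FairShap instead derive each example's weight from its estimated contribution to a fairness metric.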