Bias Detection

54 papers with code • 5 benchmarks • 8 datasets

Bias detection is the task of detecting and measuring racism, sexism, and other discriminatory behavior in a model. (Source: https://stereoset.mit.edu/)

Latest papers with no code

Extending Variability-Aware Model Selection with Bias Detection in Machine Learning Projects

no code yet • 23 Nov 2023

ML model selection depends on several factors, which include data-related attributes such as sample size, functional requirements such as the prediction algorithm type, and non-functional requirements such as performance and bias.
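
To make the interplay of these criteria concrete, here is a minimal sketch (not the paper's variability model; every name, value, and threshold below is hypothetical) of treating bias as a first-class selection constraint alongside accuracy:

```python
# Illustrative only: candidate models carry functional and non-functional
# attributes, and selection filters on a bias budget before optimizing
# accuracy. All values are made up for the example.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    algorithm: str      # functional requirement: prediction algorithm type
    accuracy: float     # non-functional requirement
    bias_score: float   # non-functional requirement (lower is better)

candidates = [
    Candidate("lr", "logistic_regression", accuracy=0.84, bias_score=0.03),
    Candidate("xgb", "gradient_boosting", accuracy=0.91, bias_score=0.12),
    Candidate("mlp", "neural_network", accuracy=0.89, bias_score=0.04),
]

# Keep only models within the bias budget, then pick the most accurate.
eligible = [c for c in candidates if c.bias_score <= 0.05]
best = max(eligible, key=lambda c: c.accuracy)
print(best.name)  # -> "mlp"
```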

Current Topological and Machine Learning Applications for Bias Detection in Text

no code yet • 22 Nov 2023

Institutional bias can impact patient outcomes, educational attainment, and legal system navigation.

Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset

no code yet • 15 Nov 2023

Developed using novel approaches to dataset construction, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature.

Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models

no code yet • 30 Oct 2023

Sixty of the reviewed studies proposed various strategies for mitigating biases, especially targeting implicit and selection biases.

Target-Aware Contextual Political Bias Detection in News

no code yet • 2 Oct 2023

Sentence-level political bias detection in news is no exception; it has proven to be a challenging task that requires understanding bias in context.

Unlocking Bias Detection: Leveraging Transformer-Based Models for Content Analysis

no code yet • 30 Sep 2023

Bias detection in text is crucial for combating the spread of negative stereotypes, misinformation, and biased decision-making.

Unsupervised Bias Detection in College Student Newspapers

no code yet • 11 Sep 2023

This paper presents a pipeline, with minimal human involvement, for scraping college newspaper archives and detecting bias in them.
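
As a rough illustration of such a scrape-then-score pipeline (not the authors' implementation; the URL, CSS selector, and sentiment proxy are all assumptions):

```python
# Sketch only: fetch an archive page, pull article paragraphs, and score
# them with an off-the-shelf classifier as a crude stand-in for a bias model.
import requests
from bs4 import BeautifulSoup
from transformers import pipeline

def scrape_articles(archive_url: str) -> list[str]:
    """Collect paragraph text from an archive page (layout is assumed)."""
    html = requests.get(archive_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [p.get_text(strip=True) for p in soup.select("article p")]

scorer = pipeline("sentiment-analysis")  # proxy scorer, not a bias detector

for text in scrape_articles("https://example.edu/newspaper/archive"):
    # Truncate long paragraphs to stay under the model's input limit.
    print(scorer(text[:512])[0], "|", text[:80])
```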

Auditing Predictive Models for Intersectional Biases

no code yet • 22 Jun 2023

Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes.
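
The point is easy to demonstrate with synthetic data. In this minimal sketch (not the paper's auditing method), a model's positive-prediction rates are identical for each protected attribute in aggregate, yet differ sharply at the intersections:

```python
# Hypothetical audit data: equal-sized subgroups whose positive-prediction
# rates cancel out in the marginals but diverge at the intersections.
import pandas as pd

rows = []
rates = {("F", "A"): 0.8, ("F", "B"): 0.2, ("M", "A"): 0.2, ("M", "B"): 0.8}
for (sex, race), rate in rates.items():
    n, k = 100, int(rate * 100)
    rows += [{"sex": sex, "race": race, "pred": 1}] * k
    rows += [{"sex": sex, "race": race, "pred": 0}] * (n - k)
df = pd.DataFrame(rows)

# Aggregate group fairness: positive rates look identical per attribute...
print(df.groupby("sex")["pred"].mean())   # F: 0.50, M: 0.50
print(df.groupby("race")["pred"].mean())  # A: 0.50, B: 0.50

# ...but the intersectional subgroups reveal a large disparity.
print(df.groupby(["sex", "race"])["pred"].mean())  # 0.8 vs. 0.2
```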

Language-Agnostic Bias Detection in Language Models with Bias Probing

no code yet • 22 May 2023

Using nationality as a case study, we show that LABDet "surfaces" nationality bias by training a classifier on top of a frozen PLM on non-nationality sentiment detection.
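
In general terms, that bias-probing recipe looks like the following sketch (not the authors' code; the model name and templates are placeholders): a frozen PLM provides representations, a small trainable head learns sentiment from nationality-free data, and score gaps on nationality templates then indicate bias.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # assumption: any PLM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
plm = AutoModel.from_pretrained(model_name)
plm.eval()
for p in plm.parameters():       # freeze the PLM; only the probe trains
    p.requires_grad = False

probe = torch.nn.Linear(plm.config.hidden_size, 2)  # sentiment head

def encode(sentences):
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    with torch.no_grad():
        out = plm(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] representation

# ... train `probe` on non-nationality sentiment data (omitted) ...

# Probe templated sentences; systematic score gaps suggest nationality bias.
templates = [f"This {x} person is here." for x in ("French", "Syrian")]
scores = torch.softmax(probe(encode(templates)), dim=-1)[:, 1]
print(dict(zip(("French", "Syrian"), scores.tolist())))
```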

Disentangling Structure and Style: Political Bias Detection in News by Inducing Document Hierarchy

no code yet • 5 Apr 2023

Our approach overcomes this limitation by considering both the sentence-level semantics and the document-level rhetorical structure, resulting in a more robust and style-agnostic approach to detecting political bias in news articles.
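
A generic version of that two-level design (a sketch under our own assumptions, not the paper's architecture) encodes sentences first and then attends over them at the document level:

```python
# Sketch: classify document-level bias from precomputed sentence embeddings
# by letting a document-level encoder aggregate sentence representations.
import torch
import torch.nn as nn

class HierarchicalBiasClassifier(nn.Module):
    def __init__(self, sent_dim: int = 384, n_labels: int = 3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=sent_dim, nhead=4,
                                           batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(sent_dim, n_labels)

    def forward(self, sent_embs: torch.Tensor) -> torch.Tensor:
        # sent_embs: (batch, n_sentences, sent_dim), e.g. from a frozen
        # sentence encoder such as Sentence-BERT (an assumption here).
        doc = self.doc_encoder(sent_embs).mean(dim=1)  # pool over sentences
        return self.classifier(doc)

# Toy usage: 2 documents, 10 sentences each, 384-dim sentence embeddings.
logits = HierarchicalBiasClassifier()(torch.randn(2, 10, 384))
print(logits.shape)  # torch.Size([2, 3])
```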