Bias Detection
54 papers with code • 5 benchmarks • 8 datasets
Bias detection is the task of detecting and measuring racism, sexism, and other discriminatory behavior in a model (Source: https://stereoset.mit.edu/)
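One common way to measure such behavior, in the spirit of StereoSet, is to check how often a model prefers a stereotypical sentence over a minimally different anti-stereotypical one. A minimal sketch, assuming a hypothetical `toy_score` stand-in for a real language-model likelihood (any callable mapping a sentence to a score works):

```python
def toy_score(sentence):
    # Hypothetical scorer used only for illustration: it happens to
    # favor sentences containing the word "assertive". A real setup
    # would use a language model's (log-)likelihood here.
    return 1.0 if "assertive" in sentence else 0.5

def stereotype_score(pairs, score=toy_score):
    """Fraction of pairs where the model prefers the stereotypical
    completion; 0.5 indicates no measured preference."""
    preferred = sum(score(stereo) > score(anti) for stereo, anti in pairs)
    return preferred / len(pairs)

# Each pair: (stereotypical sentence, anti-stereotypical sentence).
pairs = [
    ("The engineer fixed his code.", "The engineer fixed her code."),
    ("The nurse was assertive.", "The nurse was gentle."),
]
print(stereotype_score(pairs))
```

Scores far from 0.5 in either direction indicate a systematic preference, which is the quantity such benchmarks report.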
Latest papers with no code
Extending Variability-Aware Model Selection with Bias Detection in Machine Learning Projects
ML model selection depends on several factors, including data-related attributes such as sample size, functional requirements such as the prediction algorithm type, and non-functional requirements such as performance and bias.
Current Topological and Machine Learning Applications for Bias Detection in Text
Institutional bias can impact patient outcomes, educational attainment, and legal system navigation.
Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset
Using novel approaches to dataset development, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature.
Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models
Sixty studies proposed various strategies for mitigating biases, especially targeting implicit and selection biases.
Target-Aware Contextual Political Bias Detection in News
Sentence-level political bias detection in news is no exception, and has proven to be a challenging task that requires understanding bias in its surrounding context.
Unlocking Bias Detection: Leveraging Transformer-Based Models for Content Analysis
Bias detection in text is crucial for combating the spread of negative stereotypes, misinformation, and biased decision-making.
Unsupervised Bias Detection in College Student Newspapers
This paper presents a pipeline, with minimal human influence, for scraping college newspaper archives and detecting bias in them.
Auditing Predictive Models for Intersectional Biases
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes.
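The failure mode described above is easy to demonstrate: a model can have identical positive-prediction rates for each protected attribute in aggregate while a specific intersection receives none. A minimal sketch with synthetic records (the attribute values `A`/`B` and `X`/`Y` are illustrative placeholders, not data from the paper):

```python
from collections import defaultdict

# Each record: (protected attribute 1, protected attribute 2, binary
# model prediction). Synthetic values chosen to illustrate the point.
records = [
    ("A", "X", 0), ("A", "X", 0), ("A", "Y", 1), ("A", "Y", 1),
    ("B", "X", 1), ("B", "X", 1), ("B", "Y", 0), ("B", "Y", 0),
]

def positive_rates(records, key):
    """Positive-prediction rate per group, where `key` maps the two
    protected attributes to a group identifier."""
    totals, positives = defaultdict(int), defaultdict(int)
    for a1, a2, pred in records:
        k = key(a1, a2)
        totals[k] += 1
        positives[k] += pred
    return {k: positives[k] / totals[k] for k in totals}

# In aggregate, both values of each attribute get a 0.5 positive rate,
# so the model looks fair attribute-by-attribute.
print(positive_rates(records, key=lambda a1, a2: a1))
print(positive_rates(records, key=lambda a1, a2: a2))

# Disaggregating by the intersection reveals 0.0 vs 1.0 rates.
print(positive_rates(records, key=lambda a1, a2: (a1, a2)))
```

This is the core of a subgroup audit: the same metric, computed over intersections rather than single attributes, exposes disparities the aggregate view hides.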
Language-Agnostic Bias Detection in Language Models with Bias Probing
Using nationality as a case study, we show that LABDet `surfaces' nationality bias by training a classifier on top of a frozen PLM on a non-nationality sentiment-detection task.
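The probing idea can be sketched end to end with toy components. Assumptions, loudly labeled: `embed` below is a hypothetical stand-in for a frozen PLM encoder, with a spurious negative association for the placeholder nationality token `Y` deliberately planted so the probe has something to surface; the linear probe is a toy, where the real method would train e.g. a logistic-regression head on frozen PLM features.

```python
def embed(sentence):
    """Hypothetical frozen encoder: bag-of-words polarity features.
    A spurious negative association with the placeholder nationality
    token "Y" is deliberately planted for this demo."""
    words = sentence.lower().split()
    pos = float(sum(w in ("good", "kind", "great") for w in words))
    neg = float(sum(w in ("bad", "rude", "awful") for w in words))
    if "y" in words:  # the planted bias
        neg += 1.0
    return [pos, neg]

def train_sentiment_probe(examples):
    """Toy linear probe on the frozen features: sum of feature vectors
    signed by class label. The encoder itself is never updated."""
    w = [0.0, 0.0]
    for sent, label in examples:
        x = embed(sent)
        sign = 1.0 if label == 1 else -1.0
        w = [w[i] + sign * x[i] for i in range(2)]
    return lambda s: sum(wi * xi for wi, xi in zip(w, embed(s)))

# Train on non-nationality sentiment data, as in the described setup.
probe = train_sentiment_probe([
    ("The food was good", 1), ("A kind gesture", 1),
    ("The service was bad", 0), ("A rude remark", 0),
])

# Probe nationality templates: a systematic sentiment gap between
# nationalities on otherwise neutral sentences surfaces the bias.
for nat in ("X", "Y"):
    print(nat, probe("People from " + nat + " are neighbors"))
```

With the planted association, the template for `Y` scores strictly lower than the one for `X` even though both sentences are neutral, which is exactly the kind of gap the probing approach is designed to expose.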
Disentangling Structure and Style: Political Bias Detection in News by Inducing Document Hierarchy
Our approach overcomes this limitation by considering both the sentence-level semantics and the document-level rhetorical structure, resulting in a more robust and style-agnostic approach to detecting political bias in news articles.