Bias Detection
53 papers with code • 5 benchmarks • 8 datasets
Bias detection is the task of detecting and measuring racism, sexism, and other discriminatory behavior in a model (source: https://stereoset.mit.edu/)
Most implemented papers
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
We develop two methods, Intersectional Bias Detection (IBD) and Emergent Intersectional Bias Detection (EIBD), to automatically identify intersectional biases and emergent intersectional biases in static word embeddings, in addition to measuring them in contextualized word embeddings.
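The paper's IBD/EIBD implementations are not reproduced here, but the underlying measurement builds on WEAT-style association tests over word embeddings. Below is a minimal Python sketch of that general idea for static embeddings; the embedding dictionary, word lists, and threshold are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch of a WEAT-style association test on static word
# embeddings, the kind of measurement IBD/EIBD builds on.
# The embedding dict and word lists are hypothetical placeholders.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, attr_a, attr_b, emb):
    """Differential association of word w with attribute sets A and B."""
    sim_a = np.mean([cosine(emb[w], emb[a]) for a in attr_a])
    sim_b = np.mean([cosine(emb[w], emb[b]) for b in attr_b])
    return sim_a - sim_b

def detect_biased_words(targets, attr_a, attr_b, emb, threshold=0.05):
    """Flag target words whose association skews toward attribute set A."""
    return [w for w in targets
            if association(w, attr_a, attr_b, emb) > threshold]

# Hypothetical usage, assuming emb maps words to vectors (e.g. GloVe):
# emb = load_glove_vectors("glove.6B.300d.txt")
# biased = detect_biased_words(occupations, female_terms, male_terms, emb)
```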
Corpora Evaluation and System Bias Detection in Multi-document Summarization
Because the task lacks a standard definition, we encounter a plethora of datasets with varying levels of overlap and conflict between participating documents.
LOGAN: Local Group Bias Detection by Clustering
LOGAN detects local group bias by clustering instances and evaluating model behavior for different demographic groups within each cluster, surfacing biases that aggregate, corpus-level evaluation can hide.
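As a rough illustration of the idea in the title, a clustering-based local bias check might cluster instance representations and compare per-group accuracy inside each cluster. This is a hedged sketch with hypothetical inputs, not the paper's released code.

```python
# Sketch of clustering-based local bias detection: cluster examples
# in feature space, then compare model accuracy across two demographic
# groups within each cluster. All inputs are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def local_group_gaps(features, correct, group, n_clusters=10, seed=0):
    """Per-cluster accuracy gap between groups 0 and 1.

    features: (n, d) array of instance representations
    correct:  (n,) boolean array, whether the model was right
    group:    (n,) array of 0/1 demographic group labels
    """
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(features)
    gaps = {}
    for c in range(n_clusters):
        in_c = labels == c
        g0 = in_c & (group == 0)
        g1 = in_c & (group == 1)
        if g0.any() and g1.any():
            gaps[c] = correct[g0].mean() - correct[g1].mean()
    # A large |gap| in some cluster signals local group bias even when
    # the overall accuracy gap is small.
    return gaps
```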
Detecting Media Bias in News Articles using Gaussian Bias Distributions
In particular, we utilize the probability distributions of the frequency, positions, and sequential order of lexical and informational sentence-level bias in a Gaussian Mixture Model.
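A minimal sketch of that modeling step, assuming scikit-learn's GaussianMixture and placeholder article-level features; the actual feature extraction follows the paper, not this snippet.

```python
# Sketch of fitting a Gaussian Mixture Model over sentence-level bias
# features (e.g. frequency and positions of biased sentences in an
# article). The feature layout here is an illustrative assumption.
import numpy as np
from sklearn.mixture import GaussianMixture

def score_articles(train_features, test_features, n_components=2):
    """Fit a GMM on features of known-biased articles, score new ones.

    Each row is an article's feature vector, e.g.
    [biased_sentence_frequency, mean_position, position_variance].
    """
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(train_features)
    # Higher log-likelihood = closer to the bias patterns seen in training.
    return gmm.score_samples(test_features)
```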
Context in Informational Bias Detection
We find that the best-performing context-inclusive model outperforms the baseline on longer sentences and on sentences from politically centrist articles.
fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation
The package includes a series of bias mitigation methods that aim to reduce discrimination in the model.
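fairmodels itself is an R package; as a language-neutral illustration, here is a small Python sketch of one metric such tools report, the demographic parity ratio (the four-fifths rule). The data and the 0.8 threshold are illustrative assumptions, not fairmodels' API.

```python
# Demographic parity ratio: compare positive-prediction rates between
# two groups. Values well below 1.0 indicate disparate impact.
import numpy as np

def demographic_parity_ratio(y_pred, group):
    """Ratio of positive-prediction rates between groups 0 and 1."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return min(rate0, rate1) / max(rate0, rate1)

# Hypothetical predictions and group labels:
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
ratio = demographic_parity_ratio(y_pred, group)
print(f"parity ratio: {ratio:.2f}")  # < 0.8 is often flagged as disparate impact
```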
Exploring Visual Engagement Signals for Representation Learning
Visual engagement in social media platforms comprises interactions with photo posts including comments, shares, and likes.
Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics
As fairness attracts growing attention from machine learning researchers and practitioners, there is still no common framework for analyzing and comparing the capabilities of proposed models in deep representation learning.
Don't Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques
A common core assumption of these techniques is that the main model handles biased instances similarly to the biased model, in that it will resort to biases whenever they are available.
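One hedged way to probe that assumption: measure how often the main model agrees with a bias-only model on the instances the bias-only model classifies correctly (the bias-aligned instances); high agreement supports the assumption. The arrays below are hypothetical, not the paper's setup.

```python
# Agreement between the main model and a bias-only model, restricted
# to bias-aligned instances (those the bias-only model gets right).
import numpy as np

def agreement_on_biased(main_preds, bias_preds, labels):
    """Fraction of bias-aligned instances where both models agree."""
    aligned = bias_preds == labels
    return np.mean(main_preds[aligned] == bias_preds[aligned])
```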
Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud
We present Amazon SageMaker Clarify, an explainability feature for Amazon SageMaker that launched in December 2020, providing insights into data and ML models by identifying biases and explaining predictions.
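A minimal sketch of a pre-training bias check using the sagemaker SDK's clarify module; class and argument names follow the documented API, but exact signatures can vary across SDK versions, and the role ARN, S3 paths, and column names below are placeholders.

```python
# Sketch of a SageMaker Clarify pre-training bias analysis.
# All identifiers marked "placeholder" are assumptions for illustration.
from sagemaker import clarify

processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",   # placeholder path
    s3_output_path="s3://my-bucket/clarify-output",  # placeholder path
    label="approved",           # placeholder label column
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # the favorable outcome
    facet_name="gender",            # placeholder sensitive-attribute column
)

# Computes pre-training bias metrics such as class imbalance (CI)
# and difference in proportions of labels (DPL).
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```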