Search Results for author: Ninareh Mehrabi

Found 9 papers, 6 papers with code

Robust Conversational Agents against Imperceptible Toxicity Triggers

1 code implementation · 5 May 2022 · Ninareh Mehrabi, Ahmad Beirami, Fred Morstatter, Aram Galstyan

Existing work on generating such attacks either relies on human-generated attacks, which are costly and do not scale, or on automatic attacks whose attack vectors do not conform to human-like language and can therefore be detected using a language-model loss.

Language Modelling · Text Generation

Towards Multi-Objective Statistically Fair Federated Learning

no code implementations · 24 Jan 2022 · Ninareh Mehrabi, Cyprien de Lichy, John McKay, Cynthia He, William Campbell

With this goal in mind, we conduct studies showing that FL can satisfy different fairness metrics under data regimes consisting of various types of clients.

Data Poisoning · Fairness · +1

Attributing Fair Decisions with Attention Interventions

no code implementations · 8 Sep 2021 · Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, Aram Galstyan

The widespread use of Artificial Intelligence (AI) in consequential domains, such as healthcare and parole decision-making systems, has drawn intense scrutiny on the fairness of these methods.

Decision Making · Fairness

Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

no code implementations · EMNLP 2021 · Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan

In addition, we analyze two downstream models that use ConceptNet as a source for commonsense knowledge and find the existence of biases in those models as well.

Exacerbating Algorithmic Bias through Fairness Attacks

1 code implementation · 16 Dec 2020 · Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan

Algorithmic fairness has attracted significant attention in recent years, with many quantitative measures suggested for characterizing the fairness of different machine learning algorithms.

Adversarial Attack · Data Poisoning · +1
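As a toy illustration of the kind of quantitative fairness measures such attacks target (this is not the paper's code; the data and function names below are hypothetical), here is a sketch of two common group-fairness metrics, statistical parity difference and equal-opportunity difference, computed from binary predictions and a binary protected attribute:

```python
# Hedged sketch: two standard group-fairness metrics over toy data.
# All values below are made up for illustration only.

def statistical_parity_diff(y_pred, group):
    """|P(y_hat=1 | group=0) - P(y_hat=1 | group=1)|."""
    rate = lambda g: sum(p for p, a in zip(y_pred, group) if a == g) / group.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates between the two groups."""
    def tpr(g):
        pos = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Hypothetical labels, predictions, and group memberships.
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_diff(y_pred, group))           # 0.75 - 0.25 = 0.5
print(equal_opportunity_diff(y_true, y_pred, group))    # 1.0 - 0.0 = 1.0
```

A poisoning attack in this setting would inject training points chosen to push such metrics as high as possible while keeping overall accuracy plausible.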

Statistical Equity: A Fairness Classification Objective

1 code implementation · 14 May 2020 · Ninareh Mehrabi, Yuzhong Huang, Fred Morstatter

We formalize our definition of fairness and motivate it within its appropriate contexts.

Classification · Fairness · +1

Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition

1 code implementation · 24 Oct 2019 · Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan

We study the bias in several state-of-the-art named entity recognition (NER) models: specifically, a difference in their ability to recognize male and female names as PERSON entity types.

Named Entity Recognition · NER
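The gap described above can be sketched as a simple rate comparison. This is an illustrative sketch, not the paper's code: `model_tags` stands in for a real NER model's predictions and is a hypothetical placeholder, as are the name lists.

```python
# Hedged sketch: measuring the gap in how often an NER model tags
# male vs. female names as PERSON. The "model output" is simulated.

def person_recognition_rate(names, model_tags):
    """Fraction of names the model tagged as PERSON."""
    return sum(1 for n in names if model_tags.get(n) == "PERSON") / len(names)

male_names = ["James", "Robert", "Michael", "David"]
female_names = ["Mary", "Patricia", "Jennifer", "Linda"]

# Hypothetical tags a model might emit (errors on two female names).
model_tags = {
    "James": "PERSON", "Robert": "PERSON", "Michael": "PERSON", "David": "PERSON",
    "Mary": "PERSON", "Patricia": "GPE", "Jennifer": "PERSON", "Linda": "ORG",
}

gap = (person_recognition_rate(male_names, model_tags)
       - person_recognition_rate(female_names, model_tags))
print(gap)  # 1.0 - 0.5 = 0.5
```

In practice the name lists would come from census-style name frequency data and the tags from an actual NER model evaluated over templated sentences.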

A Survey on Bias and Fairness in Machine Learning

2 code implementations · 23 Aug 2019 · Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan

With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them.

