Search Results for author: Bhavya Ghai

Found 9 papers, 4 papers with code

Towards Fair and Explainable AI using a Human-Centered AI Approach

1 code implementation · 12 Jun 2023 · Bhavya Ghai

The rise of machine learning (ML) is accompanied by several high-profile cases that have stressed the need for fairness, accountability, explainability and trust in ML systems.

Fairness · Word Embeddings

D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias

no code implementations · 10 Aug 2022 · Bhavya Ghai, Klaus Mueller

A user can detect the presence of bias against a group, say females, or a subgroup, say black females, by identifying unfair causal relationships in the causal network and using an array of fairness metrics.

Fairness
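The D-BIAS excerpt above mentions checking a group (e.g., females) or subgroup (e.g., black females) against "an array of fairness metrics". As a minimal illustration of one such metric, here is a sketch of statistical parity difference; the function name and data layout are my assumptions, not the tool's actual API:

```python
def statistical_parity_difference(labels, groups, privileged):
    """P(positive outcome | unprivileged) - P(positive outcome | privileged).

    A value of 0 indicates parity; negative values indicate the
    unprivileged group receives positive outcomes less often.
    """
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Example: 'm' is the privileged group; the unprivileged group ('f')
# has a positive rate of 1.0 vs. 0.5 for the privileged group.
gap = statistical_parity_difference(
    labels=[1, 0, 1, 1],
    groups=['m', 'm', 'f', 'f'],
    privileged='m',
)
```

The same pattern extends to intersectional subgroups by encoding the group label as a tuple, e.g. `('black', 'female')`.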

Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions

1 code implementation · 8 Feb 2022 · Bhavya Ghai, Mihir Mishra, Klaus Mueller

Lastly, we offer a list of combinations of interventions that perform best for different fairness and utility metrics to aid the design of fair ML pipelines.

Fairness

Fluent: An AI Augmented Writing Tool for People who Stutter

1 code implementation · 23 Aug 2021 · Bhavya Ghai, Klaus Mueller

On hovering over any such word, Fluent presents a set of alternative words which have similar meaning but are easier to speak.

Active Learning

WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings

1 code implementation · 5 Mar 2021 · Bhavya Ghai, Md Naimul Hoque, Klaus Mueller

In this work, we present WordBias, an interactive visual tool designed to explore biases against intersectional groups encoded in static word embeddings.

Word Embeddings

Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation

no code implementations · 6 Sep 2020 · Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

The similarity score between feature rankings provided by the annotator and the local model explanation is used to assign a weight to each corresponding committee model.

Active Learning · Feature Importance
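The Active Learning++ excerpt describes weighting each committee model by how well its local explanation's feature ranking agrees with the annotator's rationale. A minimal sketch of that idea, assuming a Kendall-tau-style agreement measure over feature rankings (the function names and exact similarity measure are my assumptions, not necessarily the paper's formulation):

```python
def rank_agreement(ranking_a, ranking_b):
    """Fraction of feature pairs ordered the same way in both rankings (in [0, 1])."""
    pos_a = {f: i for i, f in enumerate(ranking_a)}
    pos_b = {f: i for i, f in enumerate(ranking_b)}
    feats = list(pos_a)
    concordant = discordant = 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            # Pairs ranked in the same relative order are concordant.
            da = pos_a[feats[i]] - pos_a[feats[j]]
            db = pos_b[feats[i]] - pos_b[feats[j]]
            if da * db > 0:
                concordant += 1
            else:
                discordant += 1
    total = concordant + discordant
    return concordant / total if total else 1.0

def committee_weights(annotator_ranking, model_rankings):
    """Normalize per-model agreement scores into committee weights summing to 1."""
    scores = [rank_agreement(annotator_ranking, r) for r in model_rankings]
    s = sum(scores)
    return [w / s for w in scores] if s else [1.0 / len(scores)] * len(scores)

# Example: model A matches the annotator's rationale, model B reverses it,
# so A receives all the weight under this simple scheme.
weights = committee_weights(
    ['age', 'income', 'debt'],
    [['age', 'income', 'debt'], ['debt', 'income', 'age']],
)
```

In a full query-by-committee loop these weights would then scale each model's vote when selecting the next instance to label.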

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

no code implementations · 24 Jan 2020 · Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller

We conducted an empirical study comparing the model learning outcomes, feedback content, and annotator experience with XAL to those of traditional AL and coactive learning (providing the model's prediction without the explanation).

Active Learning · Explainable Artificial Intelligence (XAI)
