Search Results for author: Moninder Singh

Found 15 papers, 2 papers with code

Reasoning about concepts with LLMs: Inconsistencies abound

no code implementations · 30 May 2024 · Rosario Uceda-Sosa, Karthikeyan Natesan Ramamurthy, Maria Chang, Moninder Singh

The ability to summarize and organize knowledge into abstract concepts is key to learning and reasoning.

SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models

no code implementations · 12 Dec 2023 · Manish Nagireddy, Lamogha Chiazor, Moninder Singh, Ioana Baldini

Current datasets for unwanted social bias auditing are limited to studying protected demographic features such as race and gender.

Question Answering

Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions

no code implementations · 17 Feb 2023 · Manish Nagireddy, Moninder Singh, Samuel C. Hoffman, Evaline Ju, Karthikeyan Natesan Ramamurthy, Kush R. Varshney

In this paper, focusing specifically on compositions of functions arising from the different pillars, we aim to reduce this gap, develop new insights for trustworthy ML, and answer questions such as the following.

Adversarial Robustness · Fairness +1

Anomaly Attribution with Likelihood Compensation

no code implementations · 23 Aug 2022 · Tsuyoshi Idé, Amit Dhurandhar, Jiří Navrátil, Moninder Singh, Naoki Abe

In either case, one would ideally want to compute a "responsibility score" indicative of the extent to which an input variable is responsible for the anomalous output.

Write It Like You See It: Detectable Differences in Clinical Notes By Race Lead To Differential Model Recommendations

no code implementations · 8 May 2022 · Hammaad Adam, Ming Ying Yang, Kenrick Cato, Ioana Baldini, Charles Senteio, Leo Anthony Celi, Jiaming Zeng, Moninder Singh, Marzyeh Ghassemi

In this study, we investigate the level of implicit race information available to ML models and human experts and the implications of model-detectable differences in clinical notes.

An Empirical Study of Accuracy, Fairness, Explainability, Distributional Robustness, and Adversarial Robustness

no code implementations · 29 Sep 2021 · Moninder Singh, Gevorg Ghalachyan, Kush R. Varshney, Reginald E. Bryant

To ensure trust in AI models, it is becoming increasingly apparent that evaluation of models must be extended beyond traditional performance metrics, like accuracy, to other dimensions, such as fairness, explainability, adversarial robustness, and distribution shift.

Adversarial Robustness · Fairness

Your fairness may vary: Pretrained language model fairness in toxic text classification

no code implementations · Findings (ACL) 2022 · Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Mikhail Yurochkin, Moninder Singh

Through the analysis of more than a dozen pretrained language models of varying sizes on two toxic text classification tasks (English), we demonstrate that focusing on accuracy measures alone can lead to models with wide variation in fairness characteristics.

Fairness · Language Modelling +2

Understanding racial bias in health using the Medical Expenditure Panel Survey data

no code implementations · 4 Nov 2019 · Moninder Singh, Karthikeyan Natesan Ramamurthy

Over the years, several studies have demonstrated that there exist significant disparities in health indicators in the United States population across various groups.


Interpretable Multi-Objective Reinforcement Learning through Policy Orchestration

no code implementations · 21 Sep 2018 · Ritesh Noothigattu, Djallel Bouneffouf, Nicholas Mattei, Rachita Chandra, Piyush Madan, Kush Varshney, Murray Campbell, Moninder Singh, Francesca Rossi

To ensure that agents behave in ways aligned with the values of the societies in which they operate, we must develop techniques that allow these agents to not only maximize their reward in an environment, but also to learn and follow the implicit constraints of society.

Multi-Objective Reinforcement Learning · Reinforcement Learning
