Search Results for author: Ioana Baldini

Found 13 papers, 2 papers with code

Biomedical Interpretable Entity Representations

2 code implementations Findings (ACL) 2021 Diego Garcia-Olano, Yasumasa Onoe, Ioana Baldini, Joydeep Ghosh, Byron C. Wallace, Kush R. Varshney

Pre-trained language models induce dense entity representations that offer strong performance on entity-centric NLP tasks, but such representations are not immediately interpretable.

Entity Disambiguation, Representation Learning
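As a rough illustration of the idea behind this entry (not the paper's model), the sketch below builds an entity representation whose dimensions are probabilities of human-readable types, so each coordinate can be inspected directly, in contrast to an opaque dense embedding. The mention contexts and the two biomedical types are invented toy examples.

```python
# Toy sketch of an "interpretable entity representation": a vector of
# probabilities over named (here, hypothetical) biomedical types rather than
# an opaque dense embedding. Not the paper's model; just an sklearn stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: mention contexts labelled with a coarse type.
contexts = [
    "ibuprofen reduced inflammation in patients",
    "the BRCA1 gene mutation increases cancer risk",
    "aspirin is prescribed to prevent clotting",
    "expression of the TP53 gene was measured",
]
types = ["Drug", "Gene", "Drug", "Gene"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(contexts, types)

# The "representation" of a new mention is a vector whose dimensions carry
# type names, so each coordinate can be read off directly.
probs = clf.predict_proba(["metformin lowers blood glucose"])[0]
for type_name, p in zip(clf.classes_, probs):
    print(f"{type_name}: {p:.2f}")
```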

How Data Scientists Work Together With Domain Experts in Scientific Collaborations: To Find The Right Answer Or To Ask The Right Question?

no code implementations 8 Sep 2019 Yaoli Mao, Dakuo Wang, Michael Muller, Kush R. Varshney, Ioana Baldini, Casey Dugan, Aleksandra Mojsilović

Our findings suggest that, beyond gaps in the collaboration-readiness, technology-readiness, and coupling-of-work dimensions, the tensions that arise while building common ground shape collaboration outcomes and then persist into the actual collaboration process.

Your fairness may vary: Pretrained language model fairness in toxic text classification

no code implementations Findings (ACL) 2022 Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Mikhail Yurochkin, Moninder Singh

Through the analysis of more than a dozen pretrained language models of varying sizes on two toxic text classification tasks (English), we demonstrate that focusing on accuracy measures alone can lead to models with wide variation in fairness characteristics.

Fairness, Language Modelling +2
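A minimal sketch of the kind of audit this entry describes: two hypothetical toxicity classifiers with comparable overall accuracy can differ widely in a group fairness measure, here the false-positive-rate (FPR) gap between comments that mention an identity group and those that do not. The data and models below are synthetic placeholders, not the paper's setup.

```python
# Two toy "models" with similar accuracy but very different FPR gaps across
# an identity subgroup, showing why accuracy alone can hide unfairness.
import numpy as np

def fpr(y_true, y_pred):
    neg = y_true == 0
    return np.mean(y_pred[neg]) if neg.any() else 0.0

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)             # 1 = toxic
mentions_identity = rng.integers(0, 2, size=1000)  # 1 = mentions an identity term

# Hypothetical predictions: model B over-flags non-toxic comments that
# mention the identity group, with only a modest accuracy change.
model_a = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)
model_b = model_a.copy()
flip = (y_true == 0) & (mentions_identity == 1) & (rng.random(1000) < 0.3)
model_b[flip] = 1

for name, pred in [("A", model_a), ("B", model_b)]:
    acc = np.mean(pred == y_true)
    gap = abs(fpr(y_true[mentions_identity == 1], pred[mentions_identity == 1])
              - fpr(y_true[mentions_identity == 0], pred[mentions_identity == 0]))
    print(f"model {name}: accuracy={acc:.3f}  FPR gap={gap:.3f}")
```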

Downstream Fairness Caveats with Synthetic Healthcare Data

no code implementations 9 Mar 2022 Karan Bhanot, Ioana Baldini, Dennis Wei, Jiaming Zeng, Kristin P. Bennett

In this paper, we evaluate the fairness, with respect to gender and race, of models trained on synthetic versions of two healthcare datasets.

Fairness, Generative Adversarial Network
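A hedged sketch of one way such downstream caveats can arise (not the paper's pipeline): if the synthetic data generator shifts the label rate within a demographic group, that shift propagates to any model trained on the synthetic data. The "synthetic" copy below is a toy resample, not GAN output, and all groups and rates are invented.

```python
# Compare per-group positive label rates in "real" vs "synthetic" data;
# a shift like this is one source of downstream fairness caveats.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
race = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
label = np.where(race == "A", rng.random(n) < 0.20, rng.random(n) < 0.22).astype(int)

# Toy "synthetic" copy that under-samples positives in group B.
keep = ~((race == "B") & (label == 1) & (rng.random(n) < 0.4))
race_syn, label_syn = race[keep], label[keep]

for name, r, y in [("real", race, label), ("synthetic", race_syn, label_syn)]:
    rates = {g: y[r == g].mean() for g in ("A", "B")}
    print(f"{name}: positive rate A={rates['A']:.3f}  B={rates['B']:.3f}")
```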

Write It Like You See It: Detectable Differences in Clinical Notes By Race Lead To Differential Model Recommendations

no code implementations 8 May 2022 Hammaad Adam, Ming Ying Yang, Kenrick Cato, Ioana Baldini, Charles Senteio, Leo Anthony Celi, Jiaming Zeng, Moninder Singh, Marzyeh Ghassemi

In this study, we investigate the level of implicit race information available to ML models and human experts and the implications of model-detectable differences in clinical notes.
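One simple way to gauge "model-detectable" group information, in the spirit of this entry: train a text probe to predict a demographic attribute from note text and compare its cross-validated accuracy to chance. The notes below are synthetic placeholder sentences with arbitrary wording markers, not clinical data, and the probe is a generic sklearn pipeline rather than the study's models.

```python
# Toy probe: if accuracy is clearly above 0.5, the text carries group
# information that a downstream model could also exploit.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

random.seed(0)
templates = {
    0: "patient reports mild discomfort, vitals within normal limits",
    1: "patient notes mild discomfort, vital signs stable",
}
notes, labels = [], []
for _ in range(200):
    g = random.randint(0, 1)
    # add noise so the probe is not trivially perfect
    text = templates[g] if random.random() < 0.8 else templates[1 - g]
    notes.append(text)
    labels.append(g)

probe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
print("probe accuracy:", cross_val_score(probe, notes, labels, cv=5).mean())
```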

Keeping Up with the Language Models: Robustness-Bias Interplay in NLI Data and Models

no code implementations 22 May 2023 Ioana Baldini, Chhavi Yadav, Payel Das, Kush R. Varshney

Bias auditing is further complicated by LM brittleness: when a presumably biased outcome is observed, is it due to model bias or model brittleness?
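A sketch of one way to separate the two, assuming access to an NLI predictor: check whether the model's verdict on a bias-probing premise/hypothesis pair survives meaning-preserving paraphrases; if the label flips, brittleness rather than bias alone may explain a single worrying prediction. The paraphrases and the dummy predictor below are placeholders; a real NLI model would be plugged in for an actual audit.

```python
# Consistency check over paraphrases of a bias-probing NLI example.
from collections import Counter

def audit(predict, premise, hypothesis, paraphrases):
    """predict(premise, hypothesis) -> 'entailment' / 'neutral' / 'contradiction'."""
    votes = Counter(predict(p, hypothesis) for p in [premise, *paraphrases])
    label, count = votes.most_common(1)[0]
    consistent = count == len(paraphrases) + 1
    # An inconsistent verdict flags brittleness confounding the bias audit.
    return label, consistent

def dummy_predict(premise, hypothesis):
    # stand-in for a real NLI model; deliberately brittle to wording
    return "neutral" if "person" in premise else "entailment"

premise = "The doctor finished the night shift."
paraphrases = ["The doctor completed the night shift.",
               "A person who is a doctor finished the night shift."]
hypothesis = "The man finished the night shift."
print(audit(dummy_predict, premise, hypothesis, paraphrases))
```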

Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset

no code implementations 15 Nov 2023 Brooklyn Sheppard, Anna Richter, Allison Cohen, Elizabeth Allyn Smith, Tamara Kneese, Carolyne Pelletier, Ioana Baldini, Yue Dong

Using novel approaches to dataset development, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature.

Bias Detection, Text Generation

SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models

no code implementations 12 Dec 2023 Manish Nagireddy, Lamogha Chiazor, Moninder Singh, Ioana Baldini

Current datasets for unwanted social bias auditing are limited to studying protected demographic features such as race and gender.

Question Answering

Fairness-Aware Structured Pruning in Transformers

1 code implementation 24 Dec 2023 Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar

The increasing size of large language models (LLMs) has introduced challenges in their training and inference.

Fairness, Language Modelling
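For orientation, the sketch below shows generic structured pruning at the attention-head level, the family of techniques this entry belongs to: score every head and drop the lowest-scoring ones. The plain-magnitude scoring rule is a placeholder; the paper's contribution is a fairness-aware head-selection criterion, which this sketch does not implement.

```python
# Generic structured head pruning on toy weights (illustration only).
import numpy as np

rng = np.random.default_rng(3)
n_layers, n_heads, head_dim = 4, 8, 16
# toy per-head output-projection weights: [layer, head, head_dim]
head_weights = rng.normal(size=(n_layers, n_heads, head_dim))

# Score every head (here: L2 norm of its weights) and drop the lowest 25%.
scores = np.linalg.norm(head_weights, axis=-1)
threshold = np.quantile(scores, 0.25)
keep_mask = scores > threshold  # [layer, head] -> True if the head survives

for layer in range(n_layers):
    pruned = [h for h in range(n_heads) if not keep_mask[layer, h]]
    print(f"layer {layer}: prune heads {pruned}")
```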
