Search Results for author: Jude Fernandes

Found 3 papers, 2 papers with code

ROBBIE: Robust Bias Evaluation of Large Generative Language Models

no code implementations • 29 Nov 2023 • David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith

In this work, our focus is two-fold: (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity metrics across 12 demographic axes and 5 families of generative LLMs.

Benchmarking, Fairness

Perturbation Augmentation for Fairer NLP

1 code implementation • 25 May 2022 • Rebecca Qian, Candace Ross, Jude Fernandes, Eric Smith, Douwe Kiela, Adina Williams

Unwanted and often harmful social biases are becoming ever more salient in NLP research, affecting both models and datasets.

Fairness
