Search Results for author: David Esiobu

Found 3 papers, 1 paper with code

ROBBIE: Robust Bias Evaluation of Large Generative Language Models

no code implementations • 29 Nov 2023 • David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith

In this work, our focus is two-fold: (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity metrics across 12 demographic axes and 5 families of generative LLMs.

Tasks: Benchmarking, Fairness
