Search Results for author: Virginia K. Felkner

Found 2 papers, 1 paper with code

WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models

1 code implementation • 26 Jun 2023 • Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May

We present WinoQueer: a benchmark specifically designed to measure whether large language models (LLMs) encode biases that are harmful to the LGBTQ+ community.
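The listing does not describe the scoring procedure, but benchmarks of this kind typically compare a model's preference between a more-stereotypical and a less-stereotypical sentence. Below is a minimal sketch of such a pseudo-log-likelihood comparison with a masked language model; the sentence pair, the model choice (bert-base-uncased), and the scoring function are illustrative assumptions, not WinoQueer's actual data or evaluation code.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    # Mask each token in turn and sum the log-probability the model
    # assigns to the original token at that position.
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total = 0.0
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[input_ids[i]].item()
    return total

# Illustrative placeholder pair, not an actual WinoQueer item.
stereotypical = "Queer people are bad parents."
counter = "Straight people are bad parents."

# A model that scores the stereotypical sentence higher is treated as
# preferring the biased statement for this pair.
prefers_bias = pseudo_log_likelihood(stereotypical) > pseudo_log_likelihood(counter)
print(prefers_bias)

Aggregated over many such pairs, the fraction on which the model prefers the stereotypical sentence gives a simple bias score; the released code linked above defines the benchmark's actual pairs and metric.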

Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models

no code implementations • 23 Jun 2022 • Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May

This paper presents exploratory work on whether and to what extent biases against queer and trans people are encoded in large language models (LLMs) such as BERT.

Bias Detection
