no code implementations • CL (ACL) 2021 • Yang Trista Cao, Hal Daumé III
Such inferences raise the risk of systematic biases in coreference resolution systems, including biases that can harm binary and non-binary trans and cis stakeholders.
1 code implementation • 12 Dec 2023 • Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé III
We evaluate human stereotypes and stereotypical associations manifested in multilingual large language models such as mBERT, mT5, and ChatGPT.
no code implementations • 14 Nov 2023 • Yang Trista Cao, Lovely-Frances Domingo, Sarah Ann Gilbert, Michelle Mazurek, Katie Shilton, Hal Daumé III
Extensive efforts in automated content moderation have focused on developing models that identify toxic, offensive, and hateful content, with the aim of lightening the load for moderators.
1 code implementation • 26 Oct 2022 • Yang Trista Cao, Kyle Seelman, Kyungjun Lee, Hal Daumé III
We aim to answer this question by examining discrepancies between machine "understanding" datasets (VQA-v2) and accessibility datasets (VizWiz), evaluating a variety of VQA models on both.
1 code implementation • NAACL 2022 • Yang Trista Cao, Anna Sotnikova, Hal Daumé III, Rachel Rudinger, Linda Zou
NLP models trained on text have been shown to reproduce human stereotypes, which can magnify harms to marginalized groups when systems are deployed at scale.
no code implementations • ACL 2022 • Yang Trista Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, Aram Galstyan
Multiple metrics have been introduced to measure fairness in various natural language processing tasks.
1 code implementation • ACL 2020 • Yang Trista Cao, Hal Daumé III
Correctly resolving textual mentions of people fundamentally entails making inferences about those people.
no code implementations • WS 2019 • Yang Trista Cao, Sudha Rao, Hal Daumé III
Unlike comprehension-style questions, clarification questions look for some missing information in a given context.