no code implementations • 26 Oct 2023 • Ahmed Magooda, Alec Helyar, Kyle Jackson, David Sullivan, Chad Atalla, Emily Sheng, Dan Vann, Richard Edgar, Hamid Palangi, Roman Lutz, Hongliang Kong, Vincent Yun, Eslam Kamal, Federico Zarfati, Hanna Wallach, Sarah Bird, Mei Chen
We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products and services.
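As a rough illustration of what such automated measurement can look like (a minimal sketch only; `query_llm`, `classify_harm`, and the prompt grid below are hypothetical stand-ins, not the paper's actual pipeline):

```python
# Minimal sketch of an automated harm-measurement loop over a templated
# prompt grid. query_llm and classify_harm are hypothetical stubs.
PROMPT_TEMPLATES = [
    "Write a short bio for a {group} software engineer.",
    "Describe a typical day for a {group} nurse.",
]
GROUPS = ["young", "elderly", "immigrant"]

def query_llm(prompt: str) -> str:
    return "..."  # replace with a real LLM call

def classify_harm(text: str) -> bool:
    return False  # replace with a real harm/stereotype classifier

def measure_defect_rate() -> float:
    """Fraction of completions flagged as harmful across the prompt grid."""
    flags = []
    for template in PROMPT_TEMPLATES:
        for group in GROUPS:
            completion = query_llm(template.format(group=group))
            flags.append(classify_harm(completion))
    return sum(flags) / len(flags)

print(f"defect rate: {measure_defect_rate():.2%}")
```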
1 code implementation • Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics 2023 • Eve Fleisig, Aubrie Amstutz, Chad Atalla, Su Lin Blodgett, Hal Daumé III, Alexandra Olteanu, Emily Sheng, Dan Vann, Hanna Wallach
It is critical to measure and mitigate fairness-related harms caused by AI text generation systems, including stereotyping and demeaning harms.
no code implementations • 7 Oct 2022 • Kyra Yee, Alice Schoenauer Sebag, Olivia Redfield, Emily Sheng, Matthias Eck, Luca Belli
Harmful content detection models tend to have higher false positive rates for content from marginalized groups.
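To make the measurement concrete, the per-group false positive rate is FP / (FP + TN), computed over benign content only; the sketch below uses toy data and hypothetical group labels:

```python
from collections import defaultdict

# Toy records: (group, model_flagged_harmful, actually_harmful).
records = [
    ("group_a", True,  False),  # false positive
    ("group_a", False, False),
    ("group_b", True,  False),  # false positive
    ("group_b", True,  False),  # false positive
    ("group_b", False, False),
]

# FPR per group = FP / (FP + TN), over benign content only.
fp = defaultdict(int)
tn = defaultdict(int)
for group, flagged, harmful in records:
    if not harmful:
        if flagged:
            fp[group] += 1
        else:
            tn[group] += 1

for group in sorted(fp.keys() | tn.keys()):
    rate = fp[group] / (fp[group] + tn[group])
    print(f"{group}: FPR = {rate:.2f}")
```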
no code implementations • 7 Aug 2021 • Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
Recent studies show that Natural Language Processing (NLP) technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality.
no code implementations • NAACL 2021 • Emily Sheng, Kai-Wei Chang, Prem Natarajan, Nanyun Peng
Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.
1 code implementation • ACL 2021 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner.
1 code implementation • 18 Apr 2021 • Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, Nanyun Peng
Dialogue systems in the form of chatbots and personal assistants are being increasingly integrated into people's lives.
1 code implementation • GeBNLP (COLING) 2020 • Emily Sheng, David Uthus
There is a growing collection of work analyzing and mitigating societal biases in language understanding, generation, and retrieval tasks, though examining biases in creative tasks remains underexplored.
1 code implementation • 24 Oct 2020 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
We present a general approach to controlling societal biases in natural language generation (NLG).
1 code implementation • IJCNLP 2019 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
We present a systematic study of biases in natural language generation (NLG) by analyzing text generated from prompts that contain mentions of different demographic groups.
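A minimal sketch of this prompt-based probe, using an off-the-shelf sentiment model as a rough stand-in for the paper's learned "regard" classifier (the model choices here are illustrative, not the paper's setup):

```python
# Generate continuations for prompts mentioning different demographic
# groups, then score each continuation. Sentiment is only a proxy for
# the "regard" measure used in the paper.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
scorer = pipeline("sentiment-analysis")

prompts = ["The woman worked as", "The man worked as"]
for prompt in prompts:
    text = generator(prompt, max_new_tokens=15)[0]["generated_text"]
    score = scorer(text)[0]
    print(f"{prompt!r} -> {score['label']} ({score['score']:.2f})")
```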
1 code implementation • 22 Sep 2018 • Emily Sheng, Prem Natarajan
In biomedical literature, it is common for entity boundaries to not align with word boundaries.
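For example, a gene mention like "IL-2" can sit inside the whitespace token "IL-2-mediated", so word-level tagging cannot delimit it exactly, while character-level spans can (a self-contained illustration):

```python
# An entity boundary falling inside a whitespace token: the gene name
# "IL-2" is embedded in "IL-2-mediated", so no word boundary matches it.
text = "IL-2-mediated activation of T cells"
entity_span = (0, 4)  # character span covering "IL-2"

tokens = text.split()
print(tokens[0])                            # 'IL-2-mediated' -- no clean word boundary
print(text[entity_span[0]:entity_span[1]])  # 'IL-2' -- exact character span
```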
no code implementations • WS 2017 • Jonathan Gordon, Stephen Aguilar, Emily Sheng, Gully Burns
Learners need to find suitable documents to read and prioritize them in an appropriate order.
no code implementations • WS 2017 • Emily Sheng, Prem Natarajan, Jonathan Gordon, Gully Burns
We refer to the learning utility of a document for a particular learner as the "pedagogical value" of the document to the learner.