no code implementations • 20 Nov 2023 • Theodora Worledge, Judy Hanwen Shen, Nicole Meister, Caleb Winston, Carlos Guestrin
As businesses, products, and services spring up around large language models, the trustworthiness of these models hinges on the verifiability of their outputs.
1 code implementation • 26 Oct 2023 • Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, Tatsunori B. Hashimoto
In contrast, the tendency for language models to memorize example order means that a contaminated language model will find certain canonical orderings to be much more likely than others.
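The idea above — that a contaminated model assigns its training-time canonical ordering a much higher likelihood than random shuffles of the same examples — suggests a simple permutation test. The sketch below is a hypothetical simplification: `score_order` stands in for the model's log-likelihood of a sequence of examples, and the actual test in the paper is more involved (e.g. sharded rank comparisons), so treat this as an illustration of the statistical logic only.

```python
import random

def permutation_test(score_order, examples, n_perm=200, seed=0):
    """Contamination check: if the canonical ordering scores higher than
    almost all random shuffles, the data was likely memorized in order.

    score_order : callable mapping an ordered list of examples to a score
                  (a stand-in for the model's sequence log-likelihood).
    Returns a permutation p-value; small values suggest contamination.
    """
    rng = random.Random(seed)
    canonical_score = score_order(examples)
    at_least_as_high = 0
    for _ in range(n_perm):
        shuffled = examples[:]
        rng.shuffle(shuffled)
        if score_order(shuffled) >= canonical_score:
            at_least_as_high += 1
    # Add-one smoothing so the p-value is never exactly zero
    return (at_least_as_high + 1) / (n_perm + 1)
```

For an uncontaminated model, the canonical ordering is exchangeable with any shuffle and the p-value is roughly uniform; a toy "contaminated" scorer that strictly prefers one ordering drives the p-value to its minimum of 1/(n_perm + 1).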
no code implementations • 11 Sep 2022 • Amir Pouran Ben Veyseh, Nicole Meister, Franck Dernoncourt, Thien Huu Nguyen
Keyphrase extraction is one of the essential tasks for document understanding in NLP.
no code implementations • ICCV 2023 • Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models.
no code implementations • 15 Jun 2022 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky
Specifically, we develop a novel explanation framework ELUDE (Explanation via Labelled and Unlabelled DEcomposition) that decomposes a model's prediction into two parts: one that is explainable through a linear combination of the semantic attributes, and another that is dependent on the set of uninterpretable features.
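The decomposition described above can be illustrated with ordinary least squares: regress the model's outputs on the labelled semantic attributes, call the fitted part "explained" and the remainder the contribution of uninterpretable features. This is a hypothetical simplification of ELUDE — the function name and the plain least-squares fit are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def decompose_prediction(attributes, outputs):
    """Split model outputs into a part explained by a linear combination
    of labelled semantic attributes and an unexplained residual.

    attributes : (n_samples, n_attributes) binary or real attribute matrix
    outputs    : (n_samples,) model predictions (e.g. logits)
    Returns (explained, residual, weights).
    """
    # Append a bias column so the linear part can absorb a constant offset
    A = np.hstack([attributes, np.ones((attributes.shape[0], 1))])
    weights, *_ = np.linalg.lstsq(A, outputs, rcond=None)
    explained = A @ weights
    residual = outputs - explained
    return explained, residual, weights
```

When the outputs are fully determined by the attributes, the residual vanishes; the size of the residual indicates how much of the prediction depends on features outside the labelled attribute set.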
no code implementations • COLING 2022 • Amir Pouran Ben Veyseh, Nicole Meister, Seunghyun Yoon, Rajiv Jain, Franck Dernoncourt, Thien Huu Nguyen
Acronym extraction is the task of identifying acronyms and their expanded forms in text, and is necessary for various NLP applications.
1 code implementation • 6 Dec 2021 • Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
As AI technology is increasingly applied to high-impact, high-risk domains, there have been a number of new methods aimed at making AI models more human-interpretable.
1 code implementation • RC 2020 • Sunnie S. Y. Kim, Sharon Zhang, Nicole Meister, Olga Russakovsky
The implementation of most (7 of 10) methods was straightforward, especially after we received additional details from the original authors.