Search Results for author: Nicole Meister

Found 8 papers, 3 papers with code

Unifying Corroborative and Contributive Attributions in Large Language Models

no code implementations • 20 Nov 2023 • Theodora Worledge, Judy Hanwen Shen, Nicole Meister, Caleb Winston, Carlos Guestrin

As businesses, products, and services spring up around large language models, the trustworthiness of these models hinges on the verifiability of their outputs.

Language Modelling, Large Language Model (+1)

Proving Test Set Contamination in Black Box Language Models

1 code implementation • 26 Oct 2023 • Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, Tatsunori B. Hashimoto

In contrast, the tendency for language models to memorize example order means that a contaminated language model will find certain canonical orderings to be much more likely than others.

Language Modelling
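The snippet hints at the detection recipe: a model that has memorized a benchmark assigns noticeably higher likelihood to the examples in their canonical (published) order than to random shufflings of the same examples, whereas a clean model should be indifferent to order. Below is a minimal sketch of such a permutation-style test; `score_ordering` is a hypothetical toy stand-in for a real model's log-likelihood, and the test is an illustration of the general idea rather than the paper's exact statistic.

```python
import random

def score_ordering(examples, memorized):
    """Hypothetical stand-in for a language model's log-likelihood of a
    dataset serialized in a given order. Simulates a contaminated model:
    orderings closer to the memorized one score higher. In real use this
    would be the sum of the LM's token log-probabilities."""
    pairs = set(zip(memorized, memorized[1:]))
    return sum(1 for p in zip(examples, examples[1:]) if p in pairs)

def contamination_pvalue(canonical, n_shuffles=1000, seed=0):
    """Permutation test: how often does a random shuffle of the dataset
    score at least as high as the canonical (published) ordering?"""
    rng = random.Random(seed)
    canonical_score = score_ordering(canonical, canonical)
    hits = 0
    for _ in range(n_shuffles):
        shuffled = canonical[:]
        rng.shuffle(shuffled)
        if score_ordering(shuffled, canonical) >= canonical_score:
            hits += 1
    return (hits + 1) / (n_shuffles + 1)

examples = list(range(50))  # stand-ins for benchmark examples
# A small p-value means the canonical ordering is "special" to the model,
# which is evidence of contamination.
print(contamination_pvalue(examples))
```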

Gender Artifacts in Visual Datasets

no code implementations • ICCV 2023 • Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models.

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features

no code implementations • 15 Jun 2022 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky

Specifically, we develop a novel explanation framework ELUDE (Explanation via Labelled and Unlabelled DEcomposition) that decomposes a model's prediction into two parts: one that is explainable through a linear combination of the semantic attributes, and another that is dependent on the set of uninterpretable features.

Attribute
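The decomposition lends itself to a short sketch: fit the part of the model's prediction that a linear combination of labelled semantic attributes can explain, and treat the residual as the portion attributable to unlabelled features. The numpy illustration below runs on synthetic data; the shapes and the plain least-squares fit are assumptions for illustration, not the paper's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-image semantic attribute scores (labelled)
# and the model's prediction score for one class.
n_samples, n_attributes = 500, 12
attributes = rng.normal(size=(n_samples, n_attributes))
predictions = attributes @ rng.normal(size=n_attributes) \
    + 0.3 * rng.normal(size=n_samples)  # part not captured by attributes

# Explainable part: least-squares linear combination of the attributes
# (intercept included via a column of ones).
X = np.column_stack([attributes, np.ones(n_samples)])
weights, *_ = np.linalg.lstsq(X, predictions, rcond=None)
explained = X @ weights

# Residual: the portion the labelled attributes cannot account for,
# which the ELUDE framing attributes to unlabelled features.
residual = predictions - explained
print("fraction of variance explained by labelled attributes:",
      1 - residual.var() / predictions.var())
```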

MACRONYM: A Large-Scale Dataset for Multilingual and Multi-Domain Acronym Extraction

no code implementations • COLING 2022 • Amir Pouran Ben Veyseh, Nicole Meister, Seunghyun Yoon, Rajiv Jain, Franck Dernoncourt, Thien Huu Nguyen

Acronym extraction is the task of identifying acronyms and their expanded forms in text, a capability necessary for various NLP applications.

HIVE: Evaluating the Human Interpretability of Visual Explanations

1 code implementation • 6 Dec 2021 • Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

As AI technology is increasingly applied to high-impact, high-risk domains, there have been a number of new methods aimed at making AI models more human interpretable.

Decision Making

[Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias

1 code implementation • RC 2020 • Sunnie S. Y. Kim, Sharon Zhang, Nicole Meister, Olga Russakovsky

The implementation of most (7 of 10) methods was straightforward, especially after we received additional details from the original authors.

Attribute
