Search Results for author: Hadas Kotek

Found 7 papers, 1 paper with code

Protected group bias and stereotypes in Large Language Models

no code implementations · 21 Mar 2024 · Hadas Kotek, David Q. Sun, Zidi Xiu, Margit Bowler, Christopher Klein

We conduct a two-part study: first, we solicit sentence continuations describing the occupations of individuals from different protected groups, including gender, sexuality, religion, and race.

Tasks: Ethics · Fairness +1
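The study's first step, soliciting sentence continuations about occupations across protected groups, can be illustrated with a minimal prompt-building sketch. The group and subject lists below are illustrative stand-ins, not the paper's actual stimuli.

```python
# Hypothetical prompt templates for eliciting occupation continuations
# across protected groups. GROUPS is an illustrative example, not the
# paper's actual stimulus set.

GROUPS = {
    "gender": ["The woman", "The man"],
    "religion": ["The Muslim person", "The Christian person"],
}

def continuation_prompts(groups):
    """Build sentence-continuation prompts in the style of
    occupation-stereotype probes: the model is asked to complete
    'works as a ...' for each subject."""
    prompts = []
    for axis, subjects in groups.items():
        for subject in subjects:
            prompts.append((axis, f"{subject} works as a"))
    return prompts
```

The model's completions would then be compared across axes to surface group-dependent occupation choices.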

Gender bias and stereotypes in Large Language Models

no code implementations · 28 Aug 2023 · Hadas Kotek, Rikker Dockum, David Q. Sun

Our contributions in this paper are as follows:
(a) LLMs are 3-6 times more likely to choose an occupation that stereotypically aligns with a person's gender;
(b) these choices align with people's perceptions better than with the ground truth as reflected in official job statistics;
(c) LLMs in fact amplify the bias beyond what is reflected in perceptions or the ground truth;
(d) LLMs ignore crucial ambiguities in sentence structure 95% of the time in our study items, but when explicitly prompted, they recognize the ambiguity;
(e) LLMs provide explanations for their choices that are factually inaccurate and likely obscure the true reason behind their predictions.

Tasks: Sentence
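Findings (a) and (d) describe probing with structurally ambiguous sentences where a pronoun could refer to either of two occupations. A minimal sketch of tallying stereotype-aligned answers is below; the items and the `ask_model` stub are hypothetical stand-ins for the paper's actual stimuli and LLM calls.

```python
# Illustrative sketch of counting stereotype-aligned occupation choices
# on ambiguous-pronoun sentences. ITEMS and ask_model are hypothetical;
# ask_model stands in for a real LLM query and returns a canned answer.

ITEMS = [
    # (sentence, pronoun, stereotypically-male role, stereotypically-female role)
    ("The doctor phoned the nurse because she was running late.",
     "she", "doctor", "nurse"),
    ("The mechanic greeted the receptionist because he was in a good mood.",
     "he", "mechanic", "receptionist"),
]

def ask_model(sentence, pronoun):
    # Stand-in for an LLM call such as:
    #   f"In '{sentence}', who does '{pronoun}' refer to?"
    # Returns a canned stereotype-aligned reply for illustration only.
    return {"she": "nurse", "he": "mechanic"}[pronoun]

def stereotype_rate(items):
    """Fraction of items where the model picks the occupation that
    stereotypically matches the pronoun's gender, even though each
    sentence is structurally ambiguous."""
    aligned = 0
    for sentence, pronoun, male_role, female_role in items:
        answer = ask_model(sentence, pronoun)
        stereo = female_role if pronoun == "she" else male_role
        aligned += (answer == stereo)
    return aligned / len(items)
```

With a real model, a rate near 1.0 on ambiguous items would mirror the paper's observation that ambiguity is ignored unless the model is explicitly prompted about it.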

Feedback Effect in User Interaction with Intelligent Assistants: Delayed Engagement, Adaption and Drop-out

no code implementations · 17 Mar 2023 · Zidi Xiu, Kai-Chen Cheng, David Q. Sun, Jiannan Lu, Hadas Kotek, Yuhan Zhang, Paul McCarthy, Christopher Klein, Stephen Pulman, Jason D. Williams

Next, we expand the time horizon to examine behavior changes and show that as users discover the limitations of the IA's understanding and functional capabilities, they learn to adjust the scope and wording of their requests to increase the likelihood of receiving a helpful response from the IA.

MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants

no code implementations · 13 Oct 2021 · Alkesh Patel, Joel Ruben Antony Moniz, Roman Nguyen, Nick Tzou, Hadas Kotek, Vincent Renkens

In a multimodal assistant, where vision is also one of the input modalities, identifying user intent becomes a challenging task, as visual input can influence the outcome.

Tasks: Intent Classification +4

Improving Human-Labeled Data through Dynamic Automatic Conflict Resolution

no code implementations · COLING 2020 · David Q. Sun, Hadas Kotek, Christopher Klein, Mayank Gupta, William Li, Jason D. Williams

This paper develops and implements a scalable methodology for (a) estimating the noisiness of labels produced by a typical crowdsourcing semantic annotation task, and (b) reducing the resulting error of the labeling process by as much as 20-30% in comparison to other common labeling strategies.

Tasks: Text Classification
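The abstract does not spell out the paper's conflict-resolution algorithm, but the general idea of detecting annotator disagreement and routing disputed items for re-adjudication, rather than settling everything by simple majority vote, can be sketched as follows. This is a generic illustration under that assumption, not the paper's exact method.

```python
from collections import Counter

def route_labels(annotations, agreement_threshold=1.0):
    """Split crowdsourced items into auto-accepted labels and items
    flagged for re-adjudication. A generic illustration of conflict
    detection, not the paper's exact algorithm.

    annotations: {item_id: [label, label, ...]} from multiple workers
    """
    accepted, disputed = {}, []
    for item_id, labels in annotations.items():
        top_label, top_count = Counter(labels).most_common(1)[0]
        if top_count / len(labels) >= agreement_threshold:
            accepted[item_id] = top_label
        else:
            disputed.append(item_id)  # send back for another annotation round
    return accepted, disputed
```

Re-labeling only the disputed subset concentrates annotation effort where labels are noisiest, which is the kind of mechanism that can cut labeling error relative to one-shot majority voting.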

Generating Natural Questions from Images for Multimodal Assistants

no code implementations · 17 Nov 2020 · Alkesh Patel, Akanksha Bindal, Hadas Kotek, Christopher Klein, Jason Williams

We evaluate our approach using standard metrics such as BLEU, METEOR, ROUGE, and CIDEr to measure how well the generated questions align with human-provided questions.

Tasks: Common Sense Reasoning · Natural Questions +4
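To make the evaluation step concrete, here is a self-contained, simplified BLEU-style score (clipped n-gram precision with a brevity penalty, single reference, no smoothing). Real evaluations would use a standard implementation; this sketch only illustrates what such an overlap metric computes.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. Single reference, no smoothing; for
    illustration only, not a drop-in for standard BLEU tooling."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A generated question identical to the human-provided one scores 1.0; partial overlap scores between 0 and 1, falling with missing n-grams and overly short candidates.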
