Search Results for author: Yennie Jun

Found 3 papers, 1 paper with code

Trusted Source Alignment in Large Language Models

no code implementations • 12 Nov 2023 • Vasilisa Bashlovkina, Zhaobin Kuang, Riley Matthews, Edward Clifford, Yennie Jun, William W. Cohen, Simon Baumgartner

Large language models (LLMs) are trained on web-scale corpora that inevitably include contradictory factual information from sources of varying reliability.

Fact Checking

Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models

1 code implementation • NeurIPS 2021 • Hannah Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, Frederic A. Dreyer, Aleksandar Shtedritski, Yuki M. Asano

Using a template-based data collection pipeline, we collect 396K sentence completions made by GPT-2 and find: (i) The machine-predicted jobs are less diverse and more stereotypical for women than for men, especially for intersections; (ii) Intersectional interactions are highly relevant for occupational associations, which we quantify by fitting 262 logistic models; (iii) For most occupations, GPT-2 reflects the skewed gender and ethnicity distribution found in US Labor Bureau data, and even pulls the societally-skewed distribution towards gender parity in cases where its predictions deviate from real labor market observations.

Language Modelling • Sentence • +1
