Search Results for author: Elias Benussi

Found 2 papers, 2 papers with code

Individual Fairness Guarantees for Neural Networks

1 code implementation • 11 May 2022 • Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska

We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).

Tasks: Benchmarking, Fairness
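The individual fairness (IF) property this paper certifies says that individuals who differ only slightly (e.g. only in a sensitive attribute) should receive similar predictions. Below is a minimal sketch of that property as an empirical check on a toy feed-forward network; the weights, thresholds, and sampling loop are illustrative assumptions, and random sampling can only falsify IF, whereas the paper computes formal certificates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward NN with one ReLU hidden layer (weights are illustrative,
# not taken from the paper).
W1 = rng.normal(size=(4, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1)); b2 = rng.normal(size=1)

def nn(x):
    h = np.maximum(0.0, x @ W1 + b1)
    return float(h @ W2 + b2)

def is_individually_fair(x, delta=0.01, eps=1.0, n_samples=1000):
    """Empirically test the IF property at x: every x' with
    ||x' - x||_inf <= delta should satisfy |nn(x') - nn(x)| <= eps.
    Returns False as soon as a counterexample is sampled."""
    y = nn(x)
    for _ in range(n_samples):
        x_prime = x + rng.uniform(-delta, delta, size=x.shape)
        if abs(nn(x_prime) - y) > eps:
            return False
    return True

x0 = rng.normal(size=4)
print(is_individually_fair(x0))
```

A sampling check like this gives no guarantee when it passes; certification methods instead bound the network's output over the whole perturbation region.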

Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models

1 code implementation • NeurIPS 2021 • Hannah Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, Frederic A. Dreyer, Aleksandar Shtedritski, Yuki M. Asano

Using a template-based data collection pipeline, we collect 396K sentence completions made by GPT-2 and find: (i) The machine-predicted jobs are less diverse and more stereotypical for women than for men, especially for intersections; (ii) Intersectional interactions are highly relevant for occupational associations, which we quantify by fitting 262 logistic models; (iii) For most occupations, GPT-2 reflects the skewed gender and ethnicity distribution found in US Labor Bureau data, and even pulls the societally-skewed distribution towards gender parity in cases where its predictions deviate from real labor market observations.

Tasks: Language Modelling, Sentence, +1
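A template-based pipeline of the kind described above crosses demographic attributes to produce intersectional prompts, each of which would then be completed by GPT-2. A minimal sketch follows; the template string and attribute lists are illustrative assumptions, not the paper's actual template set or its 396K completions.

```python
from itertools import product

# Hypothetical template and demographic terms for illustration only.
TEMPLATE = "The {ethnicity} {gender} works as a"
GENDERS = ["man", "woman"]
ETHNICITIES = ["Black", "White", "Asian", "Hispanic"]

def build_prompts():
    """Cross ethnicity x gender to get one prompt per intersection;
    each prompt would be sent to GPT-2 for sentence completion."""
    return [TEMPLATE.format(ethnicity=e, gender=g)
            for e, g in product(ETHNICITIES, GENDERS)]

prompts = build_prompts()
print(len(prompts))   # 4 ethnicities x 2 genders = 8 prompts
print(prompts[0])     # "The Black man works as a"
```

The predicted occupations collected this way can then be tabulated per intersection, which is the kind of data the paper's 262 logistic models are fit on.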
