Search Results for author: Prasanna Parasurama

Found 3 papers, 1 paper with code

Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring

no code implementations • NAACL (GeBNLP) 2022 • Prasanna Parasurama, João Sedoc

Despite growing concerns around gender bias in NLP models used in algorithmic hiring, there is little empirical work studying the extent and nature of gendered language in resumes. Using a corpus of 709k resumes from IT firms, we train a series of models to classify the gender of the applicant, thereby measuring the extent of gendered information encoded in resumes. We also investigate whether it is possible to obfuscate gender in resumes by removing gender identifiers, hobbies, the gender subspace in embedding models, etc. We find that a significant amount of gendered information remains in resumes even after obfuscation: a simple Tf-Idf model can learn to classify gender with AUROC=0.75, and more sophisticated transformer-based models achieve AUROC=0.8. We further find that gender-predictive values have low correlation with the gender direction of embeddings, meaning that what is predictive of gender goes well beyond what is “gendered” in the masculine/feminine sense. We discuss the algorithmic bias and fairness implications of these findings in the hiring context.

Fairness
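
For readers who want the flavor of the Tf-Idf baseline described in the abstract above, here is a minimal sketch using scikit-learn. The resume texts and labels are placeholders (the 709k-resume corpus is not public), and the pipeline settings are illustrative assumptions, not the authors' exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder data; the paper's experiments used 709k IT-firm resumes.
resumes = [
    "software engineer python java distributed systems",
    "project manager agile scrum stakeholder communication",
    "data scientist machine learning statistics",
    "frontend developer react css accessibility",
] * 10
labels = [0, 1, 0, 1] * 10  # 0/1 stand in for applicant gender labels

X_train, X_test, y_train, y_test = train_test_split(
    resumes, labels, test_size=0.25, random_state=0
)

# TF-IDF features feeding a linear classifier, analogous to the
# simple baseline the abstract reports at AUROC=0.75.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, scores))
```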

Degendering Resumes for Fair Algorithmic Resume Screening

no code implementations • 16 Dec 2021 • Prasanna Parasurama, João Sedoc

We investigate whether it is feasible to remove gendered information from resumes to mitigate potential bias in algorithmic resume screening.

Fairness
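
One of the obfuscation strategies this line of work considers, removing explicit gender identifiers, can be approximated with a simple scrubbing pass. The sketch below drops a hand-picked list of gendered tokens from resume text; the word list and tokenization are illustrative assumptions, not the authors' actual procedure, which also addresses hobbies and embedding subspaces.

```python
import re

# Illustrative (not exhaustive) list of gender identifiers to scrub.
GENDER_TERMS = {
    "he", "she", "him", "her", "his", "hers",
    "mr", "mrs", "ms", "male", "female",
    "fraternity", "sorority", "waiter", "waitress",
}

def degender(text: str) -> str:
    """Remove tokens in GENDER_TERMS, keeping all other text intact."""
    tokens = re.findall(r"\w+|\W+", text)  # alternate word / non-word chunks
    kept = [t for t in tokens if t.lower() not in GENDER_TERMS]
    return "".join(kept)

print(degender("She was president of her sorority"))
# gendered tokens removed; surrounding whitespace is left as-is
```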

raceBERT -- A Transformer-based Model for Predicting Race and Ethnicity from Names

1 code implementation • 7 Dec 2021 • Prasanna Parasurama

This paper presents raceBERT, a transformer-based model for predicting race and ethnicity from character sequences in names, along with an accompanying Python package.
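
A hedged usage sketch: assuming the model is published on the Hugging Face Hub (the identifier pparasurama/raceBERT below is an assumption, and the accompanying Python package may expose a different API; consult its documentation), it could be queried with the standard transformers pipeline.

```python
from transformers import pipeline

# Model identifier is an assumption; verify against the raceBERT package/docs.
classifier = pipeline("text-classification", model="pparasurama/raceBERT")

# The model operates on character sequences in names; the exact expected
# input formatting (e.g., "last, first" vs. "first last") may differ.
print(classifier("Prasanna Parasurama"))
```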
