Gender Bias in Contextualized Word Embeddings

NAACL 2019 · Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang

In this paper, we quantify, analyze, and mitigate gender bias exhibited in ELMo's contextualized word vectors. First, we conduct several intrinsic analyses and find that (1) the training data for ELMo contains significantly more male than female entities, (2) the trained ELMo embeddings systematically encode gender information, and (3) ELMo unequally encodes gender information about male and female entities...


Code


No code implementations yet.

