Search Results for author: Emeralda Sesari

Found 1 paper, 0 papers with code

An Empirical Study on the Fairness of Pre-trained Word Embeddings

no code implementations · NAACL (GeBNLP) 2022 · Emeralda Sesari, Max Hort, Federica Sarro

Pre-trained word embedding models are easily distributed and applied, as they spare users the effort of training models themselves.
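As a rough illustration of what "easily distributed and applied" means in practice, the sketch below loads a publicly hosted GloVe model through gensim and compares cosine similarities for a few gendered/occupation word pairs, a simple form of the association probes commonly used in embedding-fairness studies. The model name and word pairs are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumptions: gensim is installed and the
# "glove-wiki-gigaword-50" model is available via gensim-data;
# the word pairs below are illustrative, not from the paper).
import gensim.downloader as api

# Download (on first use) and load pre-trained GloVe vectors.
glove = api.load("glove-wiki-gigaword-50")

# Compare how strongly gendered words associate with occupation words.
pairs = [("man", "doctor"), ("woman", "doctor"),
         ("man", "nurse"), ("woman", "nurse")]
for a, b in pairs:
    print(f"similarity({a}, {b}) = {glove.similarity(a, b):.3f}")
```

Asymmetric similarities across such pairs are one common signal of social bias encoded in pre-trained embeddings, which is the kind of fairness property the paper examines empirically.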

Tasks: Fairness · Word Embeddings
