NAACL (GeBNLP) 2022 • Emeralda Sesari, Max Hort, Federica Sarro
Pre-trained word embedding models are easy to distribute and apply, as they spare users the effort of training models themselves.
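To illustrate how a distributed pre-trained model is typically applied, here is a minimal Python sketch that parses embeddings in the plain-text word2vec format and compares two words by cosine similarity. The words and vector values are toy examples invented for illustration, not taken from any real model.

```python
import math

def load_word2vec_text(lines):
    """Parse embeddings in the plain-text word2vec format:
    each line is a word followed by its vector components."""
    vectors = {}
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embedding "file" contents (illustrative values, not a real model).
raw = [
    "king 0.8 0.6",
    "queen 0.7 0.7",
    "apple -0.5 0.4",
]
vecs = load_word2vec_text(raw)
print(round(cosine(vecs["king"], vecs["queen"]), 3))  # → 0.99
```

In practice the same loading-and-querying pattern is what makes pre-trained models convenient: the user downloads one text or binary file and immediately queries word vectors without any training step.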