Search Results for author: Jeff Phillips

Found 5 papers, 3 papers with code

Quantized Wasserstein Procrustes Alignment of Word Embedding Spaces

no code implementations AMTA 2022 Prince O Aboagye, Yan Zheng, Michael Yeh, Junpeng Wang, Zhongfang Zhuang, Huiyuan Chen, Liang Wang, Wei Zhang, Jeff Phillips

Optimal Transport (OT) provides a useful geometric framework to estimate the permutation matrix under unsupervised cross-lingual word embedding (CLWE) models that pose the alignment task as a Wasserstein-Procrustes problem.

Bilingual Lexicon Induction · Quantization
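The abstract above poses alignment as a Wasserstein-Procrustes problem: alternately estimating a permutation matching between the two vocabularies and an orthogonal map between the spaces. A minimal sketch of the Procrustes half, the classical closed-form SVD solution on synthetic data, is below; it is illustrative only and is not the paper's quantized algorithm.

```python
import numpy as np

def orthogonal_procrustes(X, Y):
    """Return the orthogonal Q minimizing ||X Q - Y||_F.

    Closed-form solution: if X^T Y = U S V^T (SVD), then Q = U V^T.
    In Wasserstein-Procrustes this step alternates with estimating a
    permutation of the rows of Y (not shown here).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Demo: recover a known planted rotation from synthetic "embeddings".
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
theta = 0.7
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
Q = orthogonal_procrustes(X, X @ R)  # Q recovers R when X has full rank
```

SciPy ships this solver as `scipy.linalg.orthogonal_procrustes`; the NumPy version above is shown only to keep the sketch self-contained.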

Normalization of Language Embeddings for Cross-Lingual Alignment

1 code implementation NeurIPS 2021 Prince Osei Aboagye, Jeff Phillips, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Liang Wang, Hao Yang

Learning a good transfer function to map the word vectors from two languages into a shared cross-lingual word vector space plays a crucial role in cross-lingual NLP.

Translation
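The abstract above concerns normalizing embeddings before learning a cross-lingual transfer function. A common recipe in this line of work is to alternate mean-centering with length normalization; the sketch below shows that general recipe, not necessarily the paper's exact procedure.

```python
import numpy as np

def normalize_embeddings(E, iters=1):
    """Mean-center and length-normalize word embeddings.

    A standard preprocessing step before cross-lingual alignment.
    Repeating the two operations (iters > 1) drives the vectors toward
    satisfying both properties simultaneously. Sketch of the general
    recipe only.
    """
    E = np.asarray(E, dtype=float)
    for _ in range(iters):
        E = E - E.mean(axis=0)  # zero mean per dimension
        E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-length rows
    return E
```

After normalization every word vector lies on the unit sphere, which makes cosine similarity and orthogonal alignment behave more consistently across the two languages.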

On Measuring and Mitigating Biased Inferences of Word Embeddings

2 code implementations 25 Aug 2019 Sunipa Dev, Tao Li, Jeff Phillips, Vivek Srikumar

Word embeddings carry stereotypical connotations from the text they are trained on, which can lead to invalid inferences in downstream models that rely on them.

Natural Language Inference · Word Embeddings

Learning In Practice: Reasoning About Quantization

no code implementations 27 May 2019 Annie Cherkaev, Waiming Tai, Jeff Phillips, Vivek Srikumar

There is a mismatch between the standard theoretical analyses of statistical machine learning and how learning is used in practice.

Quantization

Attenuating Bias in Word Vectors

1 code implementation 23 Jan 2019 Sunipa Dev, Jeff Phillips

Word vector representations are well developed tools for various NLP and Machine Learning tasks and are known to retain significant semantic and syntactic structure of languages.
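A widely used way to attenuate bias in word vectors is linear projection: remove from each vector its component along an estimated bias direction, often taken as a difference of word vectors such as he - she. The sketch below shows that general idea; treat it as illustrative rather than the paper's exact method.

```python
import numpy as np

def project_off(vectors, bias_dir):
    """Remove each vector's component along a bias direction.

    Implements v <- v - <v, b> b for the unit vector b, so every output
    vector is orthogonal to the bias direction. The choice of bias_dir
    (e.g. a he - she difference vector) is an assumption of this sketch.
    """
    b = bias_dir / np.linalg.norm(bias_dir)
    return vectors - np.outer(vectors @ b, b)
```

After projection, dot products with the bias direction are exactly zero, while components orthogonal to it are untouched.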
