Search Results for author: Shikha Bordia

Found 6 papers, 3 papers with code

Do Attention Heads in BERT Track Syntactic Dependencies?

1 code implementation • 27 Nov 2019 • Phu Mon Htut, Jason Phang, Shikha Bordia, Samuel R. Bowman

We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.


Identifying and Reducing Gender Bias in Word-Level Language Models

no code implementations • NAACL 2019 • Shikha Bordia, Samuel R. Bowman

Many text corpora exhibit socially problematic biases, which can be propagated or amplified in the models trained on such data.

Language Modelling

On Measuring Social Biases in Sentence Encoders

1 code implementation • NAACL 2019 • Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger

The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017).

Sentence Embeddings, Word Embeddings
