1 code implementation • NAACL 2019 • Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger
The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017).
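The WEAT described above compares how strongly two sets of target words associate with two sets of attribute words. As a rough illustration (not the paper's code), the effect-size statistic from Caliskan et al. (2017) can be sketched with toy 2-d vectors standing in for real GloVe/word2vec embeddings:

```python
# Hedged sketch of the WEAT effect size (Caliskan et al., 2017).
# The toy 2-d vectors below are made up for illustration; real tests
# use pretrained GloVe or word2vec embeddings.
import numpy as np

def cos(u, v):
    # cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus to attribute set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size over the two target word sets X and Y
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy setup: targets in X point toward attribute A, targets in Y toward B,
# so the effect size should come out large and positive.
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
Y = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]
print(round(weat_effect_size(X, Y, A, B), 3))
```

Swapping the two target sets flips the sign of the statistic, which is why WEAT results are reported with a fixed ordering of targets and attributes.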
no code implementations • NAACL 2019 • Shikha Bordia, Samuel R. Bowman
Many text corpora exhibit socially problematic biases, which can be propagated or amplified in the models trained on such data.
1 code implementation • IJCNLP 2019 • Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretič, Samuel R. Bowman
We conclude that a variety of methods are necessary to reveal all relevant aspects of a model's grammatical knowledge in a given domain.
1 code implementation • 27 Nov 2019 • Phu Mon Htut, Jason Phang, Shikha Bordia, Samuel R. Bowman
We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.
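One common way to probe this, sketched below with a made-up attention matrix rather than real BERT/RoBERTa outputs, is to treat each token's maximum-attention position as its predicted syntactic head and score the predictions against gold dependency arcs (an unlabeled-attachment-style accuracy):

```python
# Hedged sketch: scoring one attention head as a dependency predictor.
# The attention matrix and gold arcs here are invented for illustration;
# in practice they would come from a pretrained transformer and a treebank.
import numpy as np

def head_uas(attn, gold_heads):
    # attn: (seq_len, seq_len) attention weights for a single head
    # gold_heads: gold dependency head index for each token
    pred = attn.argmax(axis=-1)          # max-attention position per token
    return float(np.mean(pred == np.array(gold_heads)))

# Toy attention for "the cat sleeps" (token indices 0, 1, 2);
# gold arcs: the -> cat, cat -> sleeps, sleeps -> itself (root)
attn = np.array([
    [0.1, 0.8, 0.1],   # "the" attends mostly to "cat"
    [0.2, 0.1, 0.7],   # "cat" attends mostly to "sleeps"
    [0.3, 0.3, 0.4],   # "sleeps" attends mostly to itself
])
print(head_uas(attn, [1, 2, 2]))  # → 1.0 (all three arcs recovered)
```

A head that tracks a dependency relation well will score far above a random or positional baseline on arcs of that relation type.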
no code implementations • Findings of the Association for Computational Linguistics 2020 • Yichen Jiang, Shikha Bordia, Zheng Zhong, Charles Dognin, Maneesh Singh, Mohit Bansal
We introduce HoVer (HOppy VERification), a dataset for many-hop evidence extraction and fact verification.
no code implementations • NAACL (sdp) 2021 • Yash Gupta, Pawan Sasanka Ammanamanchi, Shikha Bordia, Arjun Manoharan, Deepak Mittal, Ramakanth Pasunuru, Manish Shrivastava, Maneesh Singh, Mohit Bansal, Preethi Jyothi
Large pretrained models have seen enormous success in extractive summarization tasks.