no code implementations • 6 Apr 2022 • Sheng-Fu Wang, Shu-Hang Liu, Tian-Yi Che, Yi-Fan Lu, Song-Xiao Yang, Heyan Huang, Xian-Ling Mao
Specifically, taking a paper as a basic and separate unit, existing PDF Readers cannot access extended information about the paper, such as corresponding videos, blogs, and code.
4 code implementations • TACL 2020 • Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, Samuel R. Bowman
We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLiMP), a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English.
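The BLiMP setup scores an LM on pairs of sentences that differ minimally, where the model "passes" a pair if it assigns higher probability to the grammatical member. A minimal sketch of that evaluation idea, using a hypothetical add-one-smoothed bigram model over a toy corpus (not the paper's actual models or data):

```python
# Toy sketch of minimal-pair evaluation: an LM passes a pair if it
# assigns higher log-probability to the grammatical sentence.
import math
from collections import Counter

def train_bigram(corpus):
    """Fit add-one-smoothed bigram counts on a toy corpus."""
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return bigrams, unigrams, len(vocab)

def logprob(sentence, model):
    """Sentence log-probability under the smoothed bigram model."""
    bigrams, unigrams, v = model
    toks = ["<s>"] + sentence.split() + ["</s>"]
    return sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + v))
        for a, b in zip(toks, toks[1:])
    )

# Hypothetical training data containing only grammatical agreement patterns.
corpus = ["the cat sleeps", "the cats sleep", "a cat sleeps", "the cats sleep well"]
model = train_bigram(corpus)

# One subject-verb agreement minimal pair, in the spirit of BLiMP items.
good, bad = "the cats sleep", "the cats sleeps"
passed = logprob(good, model) > logprob(bad, model)
print(passed)  # True: the grammatical member scores higher under this toy LM
```

This only illustrates the pass/fail criterion; BLiMP itself evaluates much larger LMs over 67 paradigms of automatically generated pairs.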
1 code implementation • IJCNLP 2019 • Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretič, Samuel R. Bowman
We conclude that a variety of methods is necessary to reveal all relevant aspects of a model's grammatical knowledge in a given domain.
no code implementations • 15 Sep 2018 • Vishwali Mhasawade, Ildikó Emese Szabó, Melanie Tosik, Sheng-Fu Wang
In this work, we investigate whether the learnability bias exhibited by children is in part due to the distribution of quantifiers in natural language.
2 code implementations • CoNLL 2018 • WooJin Chung, Sheng-Fu Wang, Samuel R. Bowman
Tree-structured neural network architectures for sentence encoding draw inspiration from the approach to semantic composition generally seen in formal linguistics, and have shown empirical improvements over comparable sequence models by doing so.