On the Robustness of Language Encoders against Grammatical Errors

ACL 2020 · Fan Yin, Quanyu Long, Tao Meng, Kai-Wei Chang

We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors. Specifically, we collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data...
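As a rough illustration of what simulating grammatical errors on clean text can look like, below is a minimal, hypothetical Python sketch that injects common non-native error types (article deletion, preposition substitution) into a sentence. The error types, function names, and random sampling here are illustrative assumptions only; the paper's attack procedure instead draws on error distributions collected from real non-native speaker data.

```python
# Hypothetical sketch: rule-based injection of grammar-style errors into clean text.
# Not the paper's exact method; error types and sampling are illustrative assumptions.
import random

ARTICLES = {"a", "an", "the"}
PREPOSITIONS = ["in", "on", "at", "for", "to", "of"]

def drop_article(tokens):
    """Delete one article, mimicking a common non-native omission error."""
    idx = [i for i, t in enumerate(tokens) if t.lower() in ARTICLES]
    if idx:
        tokens.pop(random.choice(idx))
    return tokens

def swap_preposition(tokens):
    """Replace one preposition with another, mimicking a substitution error."""
    idx = [i for i, t in enumerate(tokens) if t.lower() in PREPOSITIONS]
    if idx:
        i = random.choice(idx)
        tokens[i] = random.choice([p for p in PREPOSITIONS if p != tokens[i].lower()])
    return tokens

def simulate_errors(sentence, n_edits=1):
    """Apply a small number of random grammar-style perturbations to a sentence."""
    tokens = sentence.split()
    for _ in range(n_edits):
        tokens = random.choice([drop_article, swap_preposition])(tokens)
    return " ".join(tokens)

print(simulate_errors("The model is trained on a large corpus of text."))
```

A perturbed sentence like this can then be fed to an encoder alongside the clean original to compare predictions and probe robustness.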


Code


No code implementations yet.

