1 code implementation • Findings (NAACL) 2022 • Zhao Meng, Yihan Dong, Mrinmaya Sachan, Roger Wattenhofer
In this paper, we present an approach to improve the robustness of BERT language models against word substitution-based adversarial attacks by leveraging adversarial perturbations for self-supervised contrastive learning.
no code implementations • 23 Sep 2018 • Kaiyu Chen, Yihan Dong, Xipeng Qiu, Zitian Chen
With curriculum learning, our model can handle complex arithmetic expression calculation using the deep hierarchical structure of skill models.