SemEval 2021 • Gang Rao, Maochang Li, Xiaolong Hou, Lianxin Jiang, Yang Mo, Jianping Shen
In this paper, we propose a contextual attention-based model with two-stage fine-tuning of RoBERTa.
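The abstract does not specify the attention mechanism's details, so the following is only a minimal sketch of one plausible reading of "contextual attention": the target word's embedding queries the contextual token states (here random toy vectors standing in for RoBERTa hidden states), and the resulting attention weights pool the sequence into a single vector. All names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def contextual_attention_pool(token_vecs, target_vec):
    """Attend over contextual token embeddings using the target word's
    embedding as the query; return the weights and the pooled vector."""
    scores = token_vecs @ target_vec        # (seq_len,) dot-product scores
    weights = softmax(scores)               # attention distribution over tokens
    pooled = weights @ token_vecs           # (hidden,) weighted average
    return weights, pooled

# Toy stand-ins: 6 tokens, hidden size 8 (a real model would use
# RoBERTa hidden states, e.g. hidden size 768).
rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 8))
target = rng.standard_normal(8)
weights, pooled = contextual_attention_pool(tokens, target)
```

In a full system, `pooled` would feed a regression head predicting the lexical complexity score; the two-stage fine-tuning mentioned in the abstract would first adapt RoBERTa on related data, then fine-tune on the task's training set.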
Lexical Complexity Prediction Task 2