XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation

3 Apr 2020 · Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, Ming Zhou

In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and to evaluate their performance across a diverse set of cross-lingual tasks. Compared to GLUE (Wang et al., 2019), which is labeled in English for natural language understanding tasks only, XGLUE has two main advantages: (1) it provides 11 diversified tasks that cover both natural language understanding and generation scenarios; (2) for each task, it provides labeled data in multiple languages.
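A common way to summarize a multi-task, multilingual benchmark like this is to average each task's metric over its evaluation languages, then average those per-task scores into a single number. The sketch below illustrates that two-level averaging; the task names, language sets, and score values are purely illustrative, not actual XGLUE results.

```python
# Hypothetical per-task, per-language scores (illustrative values only).
scores = {
    "NER":  {"en": 90.7, "de": 69.6, "es": 75.0, "nl": 72.9},
    "XNLI": {"en": 84.3, "fr": 78.3, "es": 79.2, "de": 77.3},
}

def task_score(per_lang):
    """Average one task's metric over its evaluation languages."""
    return sum(per_lang.values()) / len(per_lang)

def overall_score(all_scores):
    """Average the per-task averages into a single benchmark score."""
    return sum(task_score(v) for v in all_scores.values()) / len(all_scores)

print(round(overall_score(scores), 2))  # prints 78.41
```

Averaging per language first keeps a task with many languages from dominating the overall score.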

