LERT: A Linguistically-motivated Pre-trained Language Model

10 Nov 2022 · Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu

Pre-trained language models (PLMs) have become representative foundation models in the natural language processing field. Most PLMs are trained with linguistically-agnostic pre-training tasks on the surface form of the text, such as the masked language model (MLM). To further empower PLMs with richer linguistic features, in this paper we propose a simple but effective way to learn linguistic features for pre-trained language models. We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy. We carry out extensive experiments on ten Chinese NLU tasks, and the results show that LERT brings significant improvements over various comparable baselines. Furthermore, we conduct analytical experiments on various linguistic aspects, and the results show that the design of LERT is valid and effective. Resources are available at https://github.com/ymcui/LERT
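To make the multi-task setup concrete, below is a minimal, hypothetical PyTorch sketch of LERT-style pre-training: one shared encoder feeding an MLM head plus three token-level linguistic heads whose losses are combined with per-task weights. The abstract only says "three types of linguistic features"; the sketch assumes token-level tagging tasks such as POS, NER, and dependency relations. All class names, sizes, and the static task weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multi-task pre-training with linguistic heads.
# Not the authors' code; names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Stand-in for a BERT-style encoder (embeddings + Transformer layers)."""

    def __init__(self, vocab_size, hidden_size=256, num_layers=4, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        layer = nn.TransformerEncoderLayer(hidden_size, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, input_ids, attention_mask):
        x = self.embed(input_ids)
        # True in src_key_padding_mask marks padding positions to ignore.
        return self.encoder(x, src_key_padding_mask=~attention_mask.bool())


class LERTStylePretrainer(nn.Module):
    """Shared encoder + MLM head + three token-level linguistic tagging heads."""

    def __init__(self, vocab_size, num_pos, num_ner, num_dep, hidden_size=256):
        super().__init__()
        self.encoder = TinyEncoder(vocab_size, hidden_size)
        self.heads = nn.ModuleDict({
            "mlm": nn.Linear(hidden_size, vocab_size),
            "pos": nn.Linear(hidden_size, num_pos),
            "ner": nn.Linear(hidden_size, num_ner),
            "dep": nn.Linear(hidden_size, num_dep),
        })
        self.loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, input_ids, attention_mask, labels, task_weights):
        hidden = self.encoder(input_ids, attention_mask)       # (B, T, H)
        total = 0.0
        for name, head in self.heads.items():
            logits = head(hidden)                               # (B, T, C)
            loss = self.loss_fn(logits.flatten(0, 1), labels[name].flatten())
            total = total + task_weights[name] * loss
        return total


if __name__ == "__main__":
    B, T, V = 2, 16, 21128
    model = LERTStylePretrainer(vocab_size=V, num_pos=30, num_ner=15, num_dep=45)
    input_ids = torch.randint(0, V, (B, T))
    attention_mask = torch.ones(B, T, dtype=torch.long)
    labels = {
        # MLM labels: -100 everywhere except the masked positions.
        "mlm": torch.full((B, T), -100).scatter_(
            1, torch.tensor([[3], [7]]), torch.randint(0, V, (B, 1))),
        "pos": torch.randint(0, 30, (B, T)),
        "ner": torch.randint(0, 15, (B, T)),
        "dep": torch.randint(0, 45, (B, T)),
    }
    # Static weights for brevity; the paper's LIP strategy instead schedules
    # the weights of the linguistic tasks over the course of pre-training.
    weights = {"mlm": 1.0, "pos": 1.0, "ner": 1.0, "dep": 1.0}
    loss = model(input_ids, attention_mask, labels, weights)
    loss.backward()
```

The key design point this sketch tries to capture is that the linguistic tasks act as auxiliary objectives on top of MLM rather than replacing it; the LIP strategy then controls how much each auxiliary loss contributes at different stages of pre-training.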


Results from the Paper


Task                    | Dataset | Model                             | Metric    | Value | Global Rank
Stock Market Prediction | Astock  | Chinese LERT Large (News+Factors) | Accuracy  | 66.36 | #6
Stock Market Prediction | Astock  | Chinese LERT Large (News+Factors) | F1-score  | 66.16 | #6
Stock Market Prediction | Astock  | Chinese LERT Large (News+Factors) | Recall    | 66.69 | #6
Stock Market Prediction | Astock  | Chinese LERT Large (News+Factors) | Precision | 66.40 | #6
Stock Market Prediction | Astock  | Chinese LERT Large (News)         | Accuracy  | 64.37 | #9
Stock Market Prediction | Astock  | Chinese LERT Large (News)         | F1-score  | 64.30 | #9
Stock Market Prediction | Astock  | Chinese LERT Large (News)         | Recall    | 64.31 | #9
Stock Market Prediction | Astock  | Chinese LERT Large (News)         | Precision | 64.34 | #9

Methods


No methods listed for this paper.