In this paper, we propose a scoring model that automatically learns a model-based reward, and we further propose an effective curriculum-learning-based training strategy to stabilize the training process.
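The paper's exact curriculum schedule is not specified here; as a minimal sketch of the general easy-to-hard idea, one can sort training samples by a difficulty score and release them to the learner in growing stages (the function name and staging scheme below are illustrative assumptions, not the paper's method):

```python
def curriculum_stages(samples, difficulty, n_stages=3):
    """Easy-to-hard curriculum: sort samples by a difficulty score,
    then expose a growing prefix of the sorted data at each stage.
    `difficulty` is a list of per-sample scores (lower = easier)."""
    order = sorted(range(len(samples)), key=lambda i: difficulty[i])
    stages = []
    for s in range(1, n_stages + 1):
        cutoff = int(len(order) * s / n_stages)  # fraction of data unlocked
        stages.append([samples[i] for i in order[:cutoff]])
    return stages


# Example: six samples, earliest stages contain only the easiest ones.
stages = curriculum_stages(list("abcdef"), [3, 1, 2, 6, 5, 4], n_stages=3)
```

The final stage covers the full dataset, so training eventually sees every sample.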
Specifically, we focus on layer tuning for the feed-forward network (FFN) in the Transformer, namely FL-tuning.
Knowledge graphs (KGs) are an important knowledge source for a wide range of applications, and rule mining from KGs has recently attracted wide interest in the KG research community.
Next, we formulate relation extraction as a positive-unlabeled (PU) learning task to alleviate the false negative problem.
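The abstract does not give the paper's PU formulation; as a minimal sketch of a standard non-negative PU risk estimator (in the style of Kiryo et al.'s nnPU, an assumption rather than this paper's exact loss), the negative-class risk is estimated from unlabeled data, corrected by the class prior, and clamped at zero:

```python
import numpy as np


def nn_pu_risk(scores_p, scores_u, pi_p):
    """Non-negative PU risk with a sigmoid surrogate loss.
    scores_p: classifier scores on labeled positives.
    scores_u: classifier scores on unlabeled examples.
    pi_p:     assumed class prior P(y = +1)."""
    def loss(z):
        # sigmoid surrogate: l(z) = sigmoid(-z)
        return 1.0 / (1.0 + np.exp(z))

    r_p_pos = loss(scores_p).mean()    # risk of positives labeled positive
    r_p_neg = loss(-scores_p).mean()   # risk of positives labeled negative
    r_u_neg = loss(-scores_u).mean()   # risk of unlabeled labeled negative
    # Unlabeled risk minus the positive contribution estimates the
    # negative-class risk; clamp at 0 to keep the estimate non-negative.
    neg_risk = max(r_u_neg - pi_p * r_p_neg, 0.0)
    return pi_p * r_p_pos + neg_risk
```

The clamp prevents the estimated negative risk from going below zero, which is known to curb overfitting when the estimator is plugged into gradient training.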
To measure the importance of knowledge, we introduce an attention mechanism and propose deep Short Text Classification with Knowledge powered Attention (STCKA).
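STCKA's exact attention formulation is not reproduced here; as a minimal sketch under the assumption of simple dot-product attention, concept embeddings retrieved from a knowledge base can be weighted by their relevance to the short-text representation and pooled into a single knowledge vector:

```python
import numpy as np


def concept_attention(text_vec, concept_vecs):
    """Weight each concept embedding by its dot-product relevance to the
    text vector (softmax-normalized), then pool a knowledge vector.
    text_vec:     (d,) short-text representation.
    concept_vecs: (k, d) embeddings of k candidate concepts."""
    scores = concept_vecs @ text_vec            # (k,) relevance scores
    weights = np.exp(scores - scores.max())     # stable softmax
    weights = weights / weights.sum()
    knowledge_vec = weights @ concept_vecs      # (d,) attended pooling
    return weights, knowledge_vec
```

The pooled vector can then be concatenated with the text representation before the classifier, so more relevant concepts contribute more to the prediction.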