Jointly Modeling Hierarchical and Horizontal Features for Relational Triple Extraction

23 Aug 2019 · Zhepei Wei, Yantao Jia, Yuan Tian, Mohammad Javad Hosseini, Sujian Li, Mark Steedman, Yi Chang

Recent work on relational triple extraction has shown the superiority of jointly extracting entities and relations over pipelined extraction. However, most existing joint models fail to balance the modeling of entity features with the joint decoding strategy, so the interactions between the entity level and the triple level are not fully exploited. In this work, we first introduce the hierarchical dependency and horizontal commonality between the two levels, and then propose an entity-enhanced dual tagging framework that enables the triple extraction (TE) task to exploit these interactions through self-learned entity features from an auxiliary entity extraction (EE) task, without breaking the joint decoding of relational triples. Specifically, we align the EE and TE tasks in a position-wise manner by formulating them as two sequence labeling problems with an identical encoder-decoder structure. Moreover, the two tasks are organized under a carefully designed parameter-sharing scheme so that the learned entity features can be naturally shared via multi-task learning. Experiments on the NYT benchmark demonstrate the effectiveness of the proposed framework compared with state-of-the-art methods.
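To make the dual tagging idea concrete, below is a minimal sketch of the setup the abstract describes: an auxiliary entity extraction (EE) head and a triple extraction (TE) head operate position-wise over a shared encoder and are trained jointly, so entity features learned by EE are shared with TE through the common parameters. This is not the authors' released implementation; the BiLSTM encoder, tag-set sizes, layer dimensions, and the loss weight `alpha` are illustrative assumptions.

```python
# Hedged sketch of an entity-enhanced dual tagging model with multi-task learning.
# All hyperparameters and module choices here are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class DualTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256,
                 num_entity_tags=5, num_triple_tags=25):
        super().__init__()
        # Shared components: token embeddings and a BiLSTM sentence encoder.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Task-specific position-wise tagging heads.
        self.ee_head = nn.Linear(hidden_dim, num_entity_tags)   # entity extraction tags
        self.te_head = nn.Linear(hidden_dim, num_triple_tags)   # relation-aware triple tags

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))   # (batch, seq_len, hidden_dim)
        return self.ee_head(h), self.te_head(h)      # per-token logits for both tasks

def joint_loss(ee_logits, te_logits, ee_gold, te_gold, alpha=0.5):
    """Weighted sum of the two sequence-labeling losses (alpha is an assumed weight)."""
    ce = nn.CrossEntropyLoss()
    ee_loss = ce(ee_logits.flatten(0, 1), ee_gold.flatten())
    te_loss = ce(te_logits.flatten(0, 1), te_gold.flatten())
    return alpha * ee_loss + (1 - alpha) * te_loss

if __name__ == "__main__":
    # Toy batch: 2 sentences of 12 tokens with random gold tag sequences.
    model = DualTagger(vocab_size=1000)
    tokens = torch.randint(0, 1000, (2, 12))
    ee_gold = torch.randint(0, 5, (2, 12))
    te_gold = torch.randint(0, 25, (2, 12))
    ee_logits, te_logits = model(tokens)
    loss = joint_loss(ee_logits, te_logits, ee_gold, te_gold)
    loss.backward()
    print(float(loss))
```

Because both heads read the same encoder states, gradients from the EE objective shape the shared representation that the TE tagger decodes from, which is the mechanism the abstract refers to as sharing entity features without breaking joint decoding.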
