Relation Extraction Models

Our model, CubeRE, first encodes each input sentence with a language-model encoder to obtain a contextualized sequence representation. We then capture the interaction between each possible head and tail entity as a pair representation, which is used to predict entity-relation label scores. To reduce computational cost, each sentence is pruned to retain only the words with the highest entity scores. Finally, we capture the interaction between each possible relation triplet and each candidate qualifier to predict qualifier label scores, and decode the outputs.
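The pipeline above can be sketched end to end in plain Python. This is a toy illustration, not the paper's implementation: the encoder is replaced by fixed token vectors, and all weight vectors (`w_ent`, `w_rel`, `w_q`) and dimensions are hypothetical stand-ins for learned parameters.

```python
# Toy sketch of a CubeRE-style pipeline: score tokens for entity-ness,
# prune low-scoring tokens, score head-tail pairs for relations, then
# score (pair, qualifier) interactions. All weights here are made up.

def entity_scores(token_vecs, w):
    """Score each token's likelihood of belonging to an entity
    (dot product with a hypothetical learned weight vector w)."""
    return [sum(t * wi for t, wi in zip(vec, w)) for vec in token_vecs]

def prune(token_vecs, scores, k):
    """Keep only the k highest-scoring tokens, preserving sentence order."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    kept = sorted(top)
    return [token_vecs[i] for i in kept], kept

def pair_representation(head_vec, tail_vec):
    """Represent a candidate (head, tail) pair by concatenation."""
    return head_vec + tail_vec

def dot(vec, w):
    return sum(x * wi for x, wi in zip(vec, w))

# Toy "contextualized" token vectors (dimension 2) standing in for the
# language-model encoder output.
tokens = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.1]]
w_ent = [1.0, 0.5]                      # hypothetical entity scorer
scores = entity_scores(tokens, w_ent)
pruned, kept = prune(tokens, scores, k=3)

# Score every head-tail pair among the retained tokens for one
# hypothetical relation label.
w_rel = [0.5, -0.2, 0.3, 0.4]
pair_scores = {
    (i, j): dot(pair_representation(pruned[i], pruned[j]), w_rel)
    for i in range(len(pruned)) for j in range(len(pruned)) if i != j
}
best_pair = max(pair_scores, key=pair_scores.get)

# Cube filling: score the interaction between a relation triplet
# (its pair representation) and each candidate qualifier token.
w_q = [0.2, 0.1, -0.3, 0.4, 0.1, 0.2]   # hypothetical qualifier scorer
h, t = best_pair
qual_scores = [
    dot(pair_representation(pruned[h], pruned[t]) + q, w_q)
    for q in pruned
]
```

In the actual model the pair and qualifier interactions form a 3-D score cube over (head, tail, qualifier), hence the "cube-filling" framing; the nested loops above are the unvectorized analogue.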

Source: A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach

Tasks


Task | Papers | Share
Graph Construction | 1 | 33.33%
Hyper-Relational Extraction | 1 | 33.33%
Relation Extraction | 1 | 33.33%

