Chinese pre-trained language models usually exploit contextual character information to learn representations while ignoring linguistic knowledge, e.g., word and sentence information.
However, popular OpenIE systems usually output facts sequentially, predicting the next fact conditioned on the previously decoded ones, which enforces an unnecessary order on the facts and introduces error accumulation between autoregressive steps.
As a promising approach, Cross-Domain Recommendation (CDR) has attracted a surge of interest; it aims to transfer user preferences observed in a source domain to make recommendations in a target domain.
Meanwhile, rough reading is explored in a multi-round manner to discover undetected events, thereby handling the multi-event problem.
Distantly supervised named entity recognition (DS-NER) efficiently reduces labor costs but intrinsically suffers from label noise due to the strong assumptions of distant supervision.
A temporal interaction network consists of a series of chronological interactions between users and items.
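Concretely, such a network can be represented as a time-ordered sequence of (user, item, timestamp) tuples. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    """One user-item interaction at a point in time (field names are illustrative)."""
    user: str
    item: str
    timestamp: float

def build_network(interactions):
    """Order interactions chronologically to form a temporal interaction network."""
    return sorted(interactions, key=lambda x: x.timestamp)

events = [
    Interaction("u1", "i2", 3.0),
    Interaction("u2", "i1", 1.0),
    Interaction("u1", "i1", 2.0),
]
network = build_network(events)
print([(e.user, e.item) for e in network])  # → [('u2', 'i1'), ('u1', 'i1'), ('u1', 'i2')]
```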
Event extraction (EE) is a crucial information extraction task that aims to extract event information from texts.
However, the global properties of the bipartite graph, including community structures of homogeneous nodes and long-range dependencies of heterogeneous nodes, are not well preserved.
A heterogeneous graph attention network is then introduced to propagate relational messages and enrich information interaction.
Dependency trees have been shown to be effective in capturing long-range relations between target entities.
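One common way to exploit this property is to extract the shortest path between the two entities in the dependency tree. A minimal sketch over a hand-built toy tree (token indices and arcs are hypothetical, not output of a real parser):

```python
from collections import deque

def shortest_dep_path(edges, src, dst):
    """BFS over an undirected dependency tree to find the token path src -> dst."""
    adj = {}
    for h, d in edges:
        adj.setdefault(h, []).append(d)
        adj.setdefault(d, []).append(h)
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:  # walk back through predecessors
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in adj.get(node, []):
            if nb not in prev:
                prev[nb] = node
                queue.append(nb)
    return None

# head->dependent arcs for a toy sentence:
# 0:Smith 1:, 2:who 3:works 4:at 5:Acme 6:, 7:lives 8:in 9:Boston
edges = [(7, 0), (0, 3), (3, 2), (3, 4), (4, 5), (7, 8), (8, 9)]
path = shortest_dep_path(edges, 0, 9)
print(path)  # → [0, 7, 8, 9], skipping the long relative clause
```

The path connects "Smith" and "Boston" through only two intermediate tokens, even though they are nine tokens apart in the surface string, which is why such paths capture long-range relations well.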
Document-level relation extraction (RE) poses new challenges over its sentence-level counterpart, since it requires an adequate comprehension of the whole document and multi-hop reasoning across multiple sentences to reach the final result.
To mitigate the issue, we propose in this paper a one-stage joint extraction model, namely, TPLinker, which is capable of discovering overlapping relations sharing one or both entities while remaining immune to exposure bias.
Ranked #2 on Relation Extraction on NYT11-HRL
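The token-pair linking idea can be illustrated with a simplified tagging matrix; this is an illustrative sketch, not TPLinker's exact handshaking scheme:

```python
import numpy as np

# Simplified token-pair tagging: cell [i, j] = 1 marks that token i starts
# and token j ends an entity span. All spans are decoded in one pass, so no
# ordering is imposed between facts and no autoregressive errors accumulate.
tokens = ["New", "York", "is", "in", "the", "United", "States"]
entities = [(0, 1), (5, 6)]  # (start, end) spans: "New York", "United States"

n = len(tokens)
tag = np.zeros((n, n), dtype=int)
for s, e in entities:
    tag[s, e] = 1

# One-shot decoding over the upper triangle of the matrix.
decoded = [(i, j) for i in range(n) for j in range(i, n) if tag[i, j]]
print(decoded)  # → [(0, 1), (5, 6)]
```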
Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., entities may exhibit diverse roles within task relations, and references may make different contributions to queries.
More recently, Named Entity Recognition has achieved great advances aided by pre-training approaches such as BERT.
We first build a representation extractor to derive features for unlabeled data from the target domain (no test data is necessary) and then group them with a cluster miner.
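The extractor-plus-cluster-miner pipeline might look like the following sketch, with random vectors standing in for extracted features and a tiny 2-means routine as a hypothetical cluster miner:

```python
import numpy as np

def cluster_miner(X, iters=10):
    """Tiny 2-means clusterer (an illustrative stand-in for a real cluster miner)."""
    # farthest-point initialization: X[0] plus the point farthest from it
    d0 = ((X - X[0]) ** 2).sum(axis=1)
    centers = np.stack([X[0], X[np.argmax(d0)]])
    for _ in range(iters):
        # assign each feature vector to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
        # recompute each center as the mean of its assigned vectors
        for c in range(2):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# random vectors standing in for features from a representation extractor
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(3.0, 0.1, (20, 8))])
labels = cluster_miner(X).tolist()
print(sorted(set(labels[:20])), sorted(set(labels[20:])))  # → [0] [1]
```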
Previous studies on the task have verified the effectiveness of integrating syntactic dependency into graph convolutional networks.
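A single graph-convolution layer over a dependency-tree adjacency matrix can be sketched as follows (toy dimensions and random weights; a simplified layer, not any specific paper's architecture):

```python
import numpy as np

def dep_gcn_layer(A, H, W):
    """One GCN layer over dependency arcs: H' = ReLU(D^{-1} (A + I) H W)."""
    A = A + np.eye(len(A))           # add self-loops so each token keeps its own state
    D_inv = np.diag(1.0 / A.sum(1))  # row-normalize by node degree
    return np.maximum(0.0, D_inv @ A @ H @ W)

# toy sentence with head->dependent dependency arcs, treated as undirected
n, d = 5, 4
A = np.zeros((n, n))
for head, dep in [(1, 0), (1, 3), (3, 2), (3, 4)]:
    A[head, dep] = A[dep, head] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(n, d))          # token embeddings
W = rng.normal(size=(d, d))          # layer weights
out = dep_gcn_layer(A, H, W)
print(out.shape)  # → (5, 4)
```

Stacking such layers lets each token's representation mix with syntactically adjacent tokens, which is how dependency structure enters the model.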
Most Chinese pre-trained models take the character as the basic unit and learn representations from a character's external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful unit in Chinese.
Joint extraction of entities and relations aims to detect entity pairs along with their relations using a single model.
Ranked #1 on Relation Extraction on NYT-single
Relation extraction studies the issue of predicting semantic relations between pairs of entities in sentences.
Ranked #23 on Relation Extraction on TACRED
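A single relation-extraction instance makes the task concrete; the sentence, entity pair, and relation label below are illustrative, not drawn from any particular dataset:

```python
# One labeled relation-extraction instance (illustrative):
# given a sentence and a pair of entities, predict the semantic relation.
example = {
    "sentence": "Steve Jobs co-founded Apple in 1976.",
    "head": "Steve Jobs",
    "tail": "Apple",
    "relation": "founder_of",  # hypothetical label name
}

# sanity check: both entities appear in the sentence
assert example["head"] in example["sentence"]
assert example["tail"] in example["sentence"]
print(example["relation"])  # → founder_of
```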