Since summarization tasks aim to distill concise, synoptical information from a longer context, they naturally align with the objective of RE, i.e., extracting the kind of synoptical information that describes the relation between entity mentions.
Ranked #4 on Relation Extraction on TACRED
In this paper, we propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method that guides the RE models to focus on the main effects of textual context without losing the entity information.
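As a minimal sketch of the counterfactual-analysis idea (not the paper's exact formulation): compare the prediction on the full input with the prediction on a counterfactual input where the textual context is masked, and keep the difference as the context's main effect. The function name `debias` and the blending weight `lam` are illustrative assumptions.

```python
def debias(full_logits, entity_only_logits, lam=1.0):
    """Subtract the entity-only (biased) effect from the full prediction,
    keeping the main effect of the textual context."""
    return [f - lam * e for f, e in zip(full_logits, entity_only_logits)]

full = [2.0, 0.5, -1.0]           # logits from context + entities
entity_only = [1.5, -0.5, 0.0]    # logits when the context is masked out
print(debias(full, entity_only))  # context-driven effect per relation
```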
GRAPHCACHE aggregates the features from sentences in the whole dataset to learn global representations of properties, and uses them to augment the local features within individual sentences.
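The aggregate-then-augment step can be sketched as follows, assuming mean pooling over sentences that share a property and concatenation of global and local features; both choices, and the function names, are illustrative assumptions rather than GRAPHCACHE's actual architecture.

```python
def build_property_cache(features_by_property):
    """Mean-pool sentence features per property (e.g., an entity type)
    across the whole dataset to obtain a global representation."""
    cache = {}
    for prop, feats in features_by_property.items():
        dim = len(feats[0])
        cache[prop] = [sum(f[i] for f in feats) / len(feats) for i in range(dim)]
    return cache

def augment(local_feat, props, cache):
    """Concatenate a sentence's local feature with the global
    representations of its properties."""
    out = list(local_feat)
    for p in props:
        out.extend(cache[p])
    return out

cache = build_property_cache({"PER": [[1.0, 0.0], [3.0, 2.0]]})
print(augment([0.5, 0.5], ["PER"], cache))  # local + global PER mean
```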
Current question answering (QA) systems primarily consider the single-answer scenario, where each question is assumed to be paired with one correct answer.
We propose the Offline Distillation Pipeline to break this trade-off by separating the training procedure into an online interaction phase and an offline distillation phase. We also find that training with the imbalanced off-policy data from multiple environments across the lifetime causes a significant performance drop.
To achieve this, it is crucial to represent multilingual knowledge in a shared/unified space.
Recent information extraction approaches have relied on training deep neural models.
Ranked #1 on Named Entity Recognition on CoNLL++
Sentence-level relation extraction (RE) aims at identifying the relationship between two entities in a sentence.
Ranked #2 on Relation Extraction on Re-TACRED
The goal of offline reinforcement learning is to learn a policy from a fixed dataset, without further interactions with the environment.
In this paper, we propose two novel techniques, adaptive thresholding and localized context pooling, to solve the multi-label and multi-entity problems.
Ranked #6 on Relation Extraction on CDR
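Adaptive thresholding is commonly realized with a learned threshold class: at inference, only relation classes scoring above the threshold class are predicted, and an empty set means "no relation". The sketch below assumes the threshold class sits at index 0 of the logit vector; that layout and the function name are illustrative.

```python
def adaptive_threshold_predict(logits, th_index=0):
    """Return indices of relation classes whose logit exceeds the learned
    threshold class TH; an empty list means 'no relation' for the pair."""
    th = logits[th_index]
    return [i for i, l in enumerate(logits) if i != th_index and l > th]

# logits laid out as [TH, rel_1, rel_2, rel_3]
print(adaptive_threshold_predict([0.3, 0.9, 0.1, 0.5]))  # → [1, 3]
print(adaptive_threshold_predict([1.0, 0.2, 0.1]))       # → [] (no relation)
```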
In this work, we investigate a novel instantiation of H-step lookahead with a learned model and a terminal value function learned by a model-free off-policy algorithm, named Learning Off-Policy with Online Planning (LOOP).
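The core of H-step lookahead can be sketched as rolling an action sequence through the learned model, summing discounted simulated rewards, and bootstrapping with the terminal value function; the toy model below is an illustrative assumption, not LOOP's actual dynamics model.

```python
def h_step_return(model, value_fn, state, actions, gamma=0.99):
    """Evaluate an H-step action sequence: simulated rewards from the
    learned model plus the discounted terminal value estimate."""
    total, discount = 0.0, 1.0
    for a in actions:
        state, reward = model(state, a)
        total += discount * reward
        discount *= gamma
    return total + discount * value_fn(state)

# Toy deterministic model: state is a number, reward favors reaching 0.
model = lambda s, a: (s + a, -abs(s + a))
value_fn = lambda s: -abs(s)
print(h_step_return(model, value_fn, 3.0, [-1.0, -1.0], gamma=1.0))  # → -4.0
```

A planner would score many candidate action sequences this way and execute the first action of the best one.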
Our training objective is to predict an opinion word given a target word, while our ultimate goal is to learn a sentiment classifier.
Fine-tuning pre-trained language models (PTLMs), such as BERT and its improved variant RoBERTa, has been a common practice for advancing performance in natural language understanding (NLU) tasks.
While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive.
The soft matching module learns to match rules with semantically similar sentences so that raw corpora can be automatically labeled and leveraged by the RE module (with much better coverage) as augmented supervision, in addition to the exactly matched sentences.
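One common way to realize such soft matching is cosine similarity between a rule embedding and sentence embeddings, labeling every sentence above a threshold; the function names and the threshold value here are illustrative assumptions, not the module's actual learned matcher.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def soft_match(rule_vec, sent_vecs, threshold=0.8):
    """Return indices of sentences similar enough to the rule to be
    auto-labeled as augmented supervision."""
    return [i for i, v in enumerate(sent_vecs) if cosine(rule_vec, v) >= threshold]

rule = [1.0, 0.0]
sents = [[0.9, 0.1], [0.0, 1.0], [1.0, 0.05]]
print(soft_match(rule, sents))  # → [0, 2]
```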
A key challenge in reinforcement learning (RL) is environment generalization: a policy trained to solve a task in one environment often fails to solve the same task in a slightly different test environment.
These word pairs can be extracted by using dependency parsers and simple rules.
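A minimal sketch of such rule-based pair extraction, assuming the dependency parse is already available as (token, dep_label, head) triples with Universal Dependencies labels; the single `amod` rule shown is one illustrative rule, not the full rule set.

```python
def extract_pairs(parse):
    """Extract (target, opinion) word pairs from dependency triples:
    an adjectival modifier (amod) marks its head noun as the target."""
    return [(head, tok) for tok, dep, head in parse if dep == "amod"]

parse = [
    ("The", "det", "screen"),
    ("bright", "amod", "screen"),
    ("helped", "root", "ROOT"),
]
print(extract_pairs(parse))  # → [('screen', 'bright')]
```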
Thanks to their ability to encode and map semantic information into a high-dimensional latent feature space, neural networks have been successfully applied to event detection to a certain extent.
As the operation of SWEET is not bound to specific email providers, we argue that a censor would need to block all email communication to disrupt SWEET, which is infeasible, as email constitutes an important part of today's Internet.
Cryptography and Security