Multitask Learning of Negation and Speculation using Transformers

Detecting negation and speculation in language has been a task of considerable interest to the biomedical community, as it is a key component of Information Extraction systems for biomedical documents. Prior work has addressed Negation Detection and Speculation Detection separately, and both have been handled in the same way, using a two-stage pipelined approach: Cue Detection followed by Scope Resolution...
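The full architecture details sit behind the abstract cut-off, but the general shape of a multitask setup over a transformer encoder can be sketched. The snippet below is an illustrative assumption, not the authors' code: a shared encoder (here `bert-base-uncased`, chosen arbitrarily) feeds two token-classification heads, one for negation labels and one for speculation labels, so cue/scope tags for both phenomena are predicted jointly. The label set and class names are hypothetical.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultitaskNegSpec(nn.Module):
    """Shared transformer encoder with one token-classification head per task."""

    def __init__(self, encoder_name="bert-base-uncased", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Hypothetical per-token label set, e.g. {O, CUE, SCOPE}, for each task.
        self.negation_head = nn.Linear(hidden, num_labels)
        self.speculation_head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Both heads read the same contextual representations.
        return self.negation_head(states), self.speculation_head(states)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultitaskNegSpec()
batch = tokenizer("No evidence of pneumonia was found.", return_tensors="pt")
neg_logits, spec_logits = model(batch["input_ids"], batch["attention_mask"])
print(neg_logits.shape, spec_logits.shape)  # each: (1, seq_len, num_labels)
```

Swapping the encoder name for an XLNet or RoBERTa checkpoint would exercise the other transformer variants listed under Methods below; the multitask structure itself is unchanged.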


Methods used in the Paper


METHOD (TYPE)

BPE (Subword Segmentation)
Linear Warmup With Linear Decay (Learning Rate Schedules)
Residual Connection (Skip Connections)
Layer Normalization (Normalization)
Adam (Stochastic Optimization)
Multi-Head Attention (Attention Modules)
SentencePiece (Tokenizers)
Attention Dropout (Regularization)
Dense Connections (Feedforward Networks)
Softmax (Output Functions)
XLNet (Transformers)
WordPiece (Subword Segmentation)
GELU (Activation Functions)
Scaled Dot-Product Attention (Attention Mechanisms)
Dropout (Regularization)
Weight Decay (Regularization)
BERT (Language Models)
RoBERTa (Transformers)