Despite their ability to capture large amounts of knowledge during pretraining, large-scale language models often benefit from incorporating external knowledge bases, especially on commonsense reasoning tasks.
Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning.
In this paper, we first verify the assumption that clinical variables may have time-varying effects on COVID-19 outcomes.
Therefore, we manually correct these label mistakes and form a cleaner test set.
In this paper, we formulate phrase grounding as a sequence labeling task where we treat candidate regions as potential labels, and use neural chain Conditional Random Fields (CRFs) to model dependencies among regions for adjacent mentions.
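Treating candidate regions as labels and decoding a chain CRF amounts to Viterbi decoding over per-mention region scores plus pairwise transition scores between adjacent mentions. The sketch below is a minimal, generic linear-chain Viterbi decoder, not the paper's actual model; the score matrices and their shapes are illustrative assumptions.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF (illustrative sketch).

    emissions:   (T, K) unary scores — one row per mention, one column
                 per candidate region label (hypothetical setup).
    transitions: (K, K) pairwise scores between the labels of
                 adjacent mentions.
    Returns the highest-scoring label sequence as a list of ints.
    """
    T, K = emissions.shape
    score = emissions[0].copy()            # best score ending in each label
    backptr = np.zeros((T, K), dtype=int)  # argmax predecessors
    for t in range(1, T):
        # cand[i, j]: best path ending in label i at t-1, then label j at t
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # trace back the best path from the final best label
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```

With zero transition scores the decoder reduces to per-mention argmax; a strong transition score can pull an adjacent mention toward a different region than its unary scores alone would pick, which is the dependency the chain CRF is meant to capture.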