Coreference resolution is the task of clustering mentions in text that refer to the same underlying real-world entities.
Example: "I voted for Obama because he was most aligned with my values," she said.
"I", "my", and "she" belong to the same cluster and "Obama" and "he" belong to the same cluster.
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.
Ranked #1 on Question Answering on SQuAD1.1 dev
Tasks: Common Sense Reasoning, Coreference Resolution, Linguistic Acceptability, Named Entity Recognition, Natural Language Inference, Natural Language Understanding, Question Answering, Reading Comprehension, Semantic Textual Similarity, Sentiment Analysis, Word Sense Disambiguation
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
Ranked #2 on Citation Intent Classification on ACL-ARC (using extra training data)
Tasks: Citation Intent Classification, Conversational Response Selection, Coreference Resolution, Language Modelling, Named Entity Recognition, Natural Language Inference, Question Answering, Semantic Role Labeling, Sentiment Analysis
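To make the "contextualized representation" idea concrete: the same surface word receives different vectors in different sentences. ELMo itself ships with AllenNLP; the sketch below uses a BERT model from Hugging Face transformers purely as a stand-in contextual encoder, so the model name and helper function are assumptions, not the paper's setup.

```python
# Sketch: the same word gets context-dependent vectors (polysemy), here
# illustrated with a BERT encoder standing in for ELMo.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence, word):
    """Return the contextual vector of `word` (first occurrence) in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]      # (seq_len, hidden_dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (enc.input_ids[0] == word_id).nonzero()[0].item()
    return hidden[position]

v_river = embed("she sat on the bank of the river.", "bank")
v_money = embed("she deposited money at the bank.", "bank")
print(torch.cosine_similarity(v_river, v_money, dim=0))  # noticeably below 1.0
```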
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
Ranked #1 on Language Modelling on LAMBADA
Tasks: Common Sense Reasoning, Coreference Resolution, Domain Adaptation, Few-Shot Learning, Language Modelling, Multi-Task Learning, Natural Language Inference, Question Answering, Sentence Completion, Unsupervised Machine Translation, Word Sense Disambiguation
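A toy illustration of the few-shot setup described in that abstract, applied to pronoun resolution: the only "training data" is a handful of solved examples placed in the prompt ahead of a new query. The prompt wording is invented for illustration; no model call is made here.

```python
# Few-shot prompting sketch: solved examples followed by an unsolved query.
few_shot_prompt = """\
Resolve the pronoun in each sentence.

Sentence: The trophy didn't fit in the suitcase because it was too big.
Question: What does "it" refer to?
Answer: the trophy

Sentence: The councilmen refused the demonstrators a permit because they feared violence.
Question: What does "they" refer to?
Answer: the councilmen

Sentence: I voted for Obama because he was most aligned with my values.
Question: What does "he" refer to?
Answer:"""

print(few_shot_prompt)  # this text would be sent as-is to a large language model
```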
We introduce Stanza, an open-source Python natural language processing toolkit supporting 66 human languages.
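A minimal usage sketch of the toolkit, assuming `pip install stanza` and an internet connection for the one-time model download; the processor list and example sentence are illustrative choices.

```python
# Stanza sketch: download English models, build a pipeline, annotate text.
import stanza

stanza.download("en")                                      # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,ner")

doc = nlp("Barack Obama was born in Hawaii.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.lemma)            # token, POS tag, lemma
for ent in doc.ents:
    print(ent.text, ent.type)                              # named entities
```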
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text.
Ranked #1 on Relation Extraction on Re-TACRED
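A toy sketch of the contiguous-span masking that pre-training method uses, with span lengths drawn from a truncated geometric distribution as described in the paper; this is not the authors' code, and the span boundary objective is omitted.

```python
# Sketch of contiguous-span masking (SpanBERT-style), not the reference implementation.
import random

def sample_span_length(p=0.2, max_len=10):
    """Geometric(p) span length, truncated at max_len."""
    length = 1
    while random.random() > p and length < max_len:
        length += 1
    return length

def mask_spans(tokens, mask_token="[MASK]", mask_ratio=0.15):
    tokens = list(tokens)
    budget = max(1, int(len(tokens) * mask_ratio))
    masked = set()
    while len(masked) < budget:
        length = sample_span_length()
        start = random.randrange(0, len(tokens))
        masked.update(range(start, min(start + length, len(tokens))))
    return [mask_token if i in masked else t for i, t in enumerate(tokens)]

print(mask_spans("coreference resolution links mentions of the same entity".split()))
```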
We introduce a fully differentiable approximation to higher-order inference for coreference resolution.
Ranked #6 on Coreference Resolution on OntoNotes
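The sketch below illustrates the kind of soft, attention-based refinement that differentiable higher-order inference performs: each span representation is updated with a gated mixture of its expected antecedent representation, and pairwise scores are recomputed. The dimensions, the bilinear scorer, and the gate are placeholders, not the paper's architecture.

```python
# Toy higher-order inference sketch: iterative antecedent-attention refinement.
import torch

num_spans, dim = 5, 16
spans = torch.randn(num_spans, dim)              # candidate span embeddings
scorer = torch.nn.Bilinear(dim, dim, 1)          # stand-in pairwise antecedent scorer
gate = torch.nn.Linear(2 * dim, dim)

for _ in range(2):                               # a couple of refinement iterations
    pair_scores = scorer(
        spans.unsqueeze(1).expand(-1, num_spans, -1).reshape(-1, dim),
        spans.unsqueeze(0).expand(num_spans, -1, -1).reshape(-1, dim),
    ).view(num_spans, num_spans)
    # only earlier spans may serve as antecedents
    mask = torch.tril(torch.ones(num_spans, num_spans), diagonal=-1).bool()
    pair_scores = pair_scores.masked_fill(~mask, float("-inf"))
    probs = torch.softmax(pair_scores, dim=-1)    # antecedent distribution per span
    probs = torch.nan_to_num(probs)               # first span has no antecedent
    expected_antecedent = probs @ spans
    f = torch.sigmoid(gate(torch.cat([spans, expected_antecedent], dim=-1)))
    spans = f * spans + (1 - f) * expected_antecedent
```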
We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector.
Ranked #10 on Coreference Resolution on CoNLL 2012
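A toy version of the end-to-end formulation: every candidate span gets a mention score, every ordered pair gets an antecedent score, and each span links to its highest-scoring earlier span or to a dummy "no antecedent" with fixed score 0. The scorers here are untrained stand-ins, not the paper's feed-forward networks over LSTM span representations.

```python
# End-to-end coreference sketch: s(i, j) = s_m(i) + s_m(j) + s_a(i, j), dummy scores 0.
import torch

num_spans, dim = 6, 16
spans = torch.randn(num_spans, dim)
mention_scorer = torch.nn.Linear(dim, 1)
pair_scorer = torch.nn.Linear(2 * dim, 1)

s_m = mention_scorer(spans).squeeze(-1)                    # mention scores s_m(i)
links = []
for i in range(num_spans):
    best_j, best_score = None, 0.0                         # dummy antecedent scores 0
    for j in range(i):
        s_a = pair_scorer(torch.cat([spans[i], spans[j]])).item()
        score = s_m[i].item() + s_m[j].item() + s_a        # pair score s(i, j)
        if score > best_score:
            best_j, best_score = j, score
    links.append((i, best_j))                              # None means "no antecedent"
print(links)
```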
We apply BERT to coreference resolution, achieving strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks.
Ranked #2 on Coreference Resolution on OntoNotes
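One practical step implied by applying BERT here is encoding documents that exceed the 512-token window: the document is split into segments that are encoded separately before span scoring. The sketch below shows a plain non-overlapping split; the segment length, model name, and omission of special tokens are assumptions for illustration.

```python
# Sketch: encode a long document with BERT in fixed-size segments.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")

document = " ".join(["Coreference resolution links mentions of the same entity."] * 50)
ids = tokenizer(document, add_special_tokens=False)["input_ids"]

segment_len = 128  # kept small for the example
token_reprs = []
for start in range(0, len(ids), segment_len):
    segment = torch.tensor([ids[start:start + segment_len]])
    with torch.no_grad():
        token_reprs.append(encoder(input_ids=segment).last_hidden_state[0])
token_reprs = torch.cat(token_reprs, dim=0)  # one contextual vector per document token
print(token_reprs.shape)
```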
Coreference resolution systems are typically trained with heuristic loss functions that require careful tuning.
Ranked #11 on Coreference Resolution on OntoNotes