270 papers with code • 1 benchmark • 2 datasets
We also present a detailed empirical analysis of the key factors required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high- and low-resource languages at scale.
Current state-of-the-art approaches for named entity recognition (NER) typically consider text at the sentence-level and thus do not model information that crosses sentence boundaries.
Summary: Named Entity Recognition (NER) is an important step in biomedical information extraction pipelines.
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.
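A minimal sketch of what "standard component" means in practice: loading pre-trained word vectors into an embedding layer that downstream layers consume. The toy vocabulary and random vectors below stand in for real pre-trained embeddings such as GloVe or word2vec; nothing here is tied to a specific paper.

```python
import torch
import torch.nn as nn

# Toy vocabulary and stand-in "pre-trained" vectors (in practice these
# would be read from a GloVe/word2vec file).
vocab = {"<pad>": 0, "<unk>": 1, "Obama": 2, "visited": 3, "Berlin": 4}
dim = 50
pretrained = torch.randn(len(vocab), dim)

# Initialize the embedding layer from the pre-trained matrix;
# freeze=False lets the vectors be fine-tuned with the rest of the model.
embedding = nn.Embedding.from_pretrained(
    pretrained, freeze=False, padding_idx=vocab["<pad>"])

ids = torch.tensor([[vocab["Obama"], vocab["visited"], vocab["Berlin"]]])
vectors = embedding(ids)
print(vectors.shape)  # torch.Size([1, 3, 50])
```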
Ranked #38 on Named Entity Recognition on CoNLL 2003 (English)
State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available.
Ranked #6 on Named Entity Recognition on CoNLL++
We make all code and pre-trained models available to the research community for use and reproduction.
Bidirectional long short-term memory networks (BiLSTMs) have been widely used as encoders in models solving the named entity recognition (NER) task.
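For concreteness, here is a sketch of the standard BiLSTM-encoder setup for NER that this line refers to: an embedding layer, a bidirectional LSTM, and a per-token classifier. The hyper-parameters and the 9-tag output size are illustrative assumptions, not values from any particular paper (a full system would typically add a CRF layer on top).

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=256, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional: forward and backward hidden states are
        # concatenated, so the encoder output dimension is 2 * hidden_dim.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        h, _ = self.encoder(x)      # (batch, seq_len, 2 * hidden_dim)
        return self.classifier(h)   # per-token tag scores

model = BiLSTMTagger(vocab_size=10_000)
scores = model(torch.randint(0, 10_000, (2, 12)))  # 2 sentences, 12 tokens
print(scores.shape)  # torch.Size([2, 12, 9])
```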
Ranked #8 on Chinese Named Entity Recognition on Resume NER
State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing.
Ranked #5 on Named Entity Recognition on CoNLL++
Pre-trained language models have achieved great success on various natural language understanding (NLU) tasks thanks to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.
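A sketch of how such a pre-trained language model is applied to NER as token classification, using the Hugging Face `transformers` library; the `bert-base-cased` checkpoint and the 9-label tag set (the CoNLL-2003 BIO scheme) are illustrative choices, not tied to any paper listed above.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# num_labels=9 matches the CoNLL-2003 BIO tag set
# (O plus B-/I- for PER, ORG, LOC, MISC).
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=9)

inputs = tokenizer("Obama visited Berlin", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # (1, num_subwords, 9)
predictions = logits.argmax(dim=-1)   # predicted tag id per subword
print(predictions)
```

Note that the classification head is randomly initialized here; it would be fine-tuned on labeled NER data before the predictions are meaningful.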