Named entity recognition (NER) is the task of tagging entities in text with their corresponding type. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.
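To illustrate the BIO scheme, here is a minimal sketch that converts token-level entity spans into BIO tags; the function name `bio_tags` and the example sentence and spans are hypothetical, not taken from any paper below:

```python
def bio_tags(tokens, entities):
    """Convert entity spans into per-token BIO tags.

    entities: list of (start, end_exclusive, type) over token indices.
    """
    tags = ["O"] * len(tokens)  # default: non-entity
    for start, end, etype in entities:
        tags[start] = f"B-{etype}"          # beginning of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"          # inside the entity
    return tags

tokens = ["Barack", "Obama", "visited", "Paris", "."]
# "Barack Obama" is a PER entity over tokens 0-1; "Paris" is LOC at token 3.
print(bio_tags(tokens, [(0, 2, "PER"), (3, 4, "LOC")]))
# → ['B-PER', 'I-PER', 'O', 'B-LOC', 'O']
```

Distinguishing B- from I- tags lets a tagger separate two adjacent entities of the same type, which plain IO tagging cannot.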
(Image credit: Zalando)
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
SOTA for Common Sense Reasoning on SWAG
We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
SOTA for CCG Supertagging on CCGBank
We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference.
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
#2 best model for Sentiment Analysis on SST-5 Fine-grained classification
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.
#20 best model for Named Entity Recognition on CoNLL 2003 (English)
State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available.
#32 best model for Named Entity Recognition on CoNLL 2003 (English)
Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.
We make all code and pre-trained models available to the research community for use and reproduction.
#6 best model for Named Entity Recognition on CoNLL 2003 (English) (using extra training data)
Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.
SOTA for Chunking on Penn Treebank