Sentiment analysis is the task of classifying the polarity of a given text.
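As a concrete illustration of the task (not tied to any one entry below), a polarity classifier can be run off the shelf with the Hugging Face transformers library, assuming it is installed; a default pretrained model is downloaded on first use:

```python
from transformers import pipeline

# Off-the-shelf polarity classifier; the backing model is a library default.
classifier = pipeline("sentiment-analysis")
print(classifier("This film was an absolute delight."))
# Output has the form [{'label': 'POSITIVE', 'score': ...}]
```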
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
Ranked #1 on Question Answering on CoQA
Tasks: Common Sense Reasoning, Conversational Response Selection, Cross-Lingual Natural Language Inference, Named Entity Recognition, Natural Language Understanding, Question Answering, Sentence Classification, Sentiment Analysis
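A minimal sketch of how a pretrained bidirectional encoder like BERT is typically adapted to sentiment analysis: load a public checkpoint, attach a classification head, and fine-tune. The checkpoint name and two-label setup are illustrative assumptions, not the paper's exact configuration:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "bert-base-uncased" is the standard public checkpoint; the two-way
# classification head on top is initialized randomly and needs fine-tuning
# on labeled sentiment data before its outputs are meaningful.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("A gripping, well-acted thriller.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # unnormalized scores for the 2 classes
```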
Adversarial training provides a means of regularizing supervised learning algorithms, while virtual adversarial training extends supervised learning algorithms to the semi-supervised setting.
Ranked #12 on Sentiment Analysis on IMDb
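The entry above applies adversarial and virtual adversarial training to word embeddings. A minimal PyTorch sketch of the supervised adversarial term, assuming a hypothetical `model` that maps embeddings to class logits (the general gradient-perturbation recipe, not the paper's exact hyperparameters):

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, embeddings, labels, epsilon=1.0):
    # Forward pass on clean embeddings, tracking input gradients.
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(embeddings), labels)
    # The input gradient points in the locally worst-case direction.
    (grad,) = torch.autograd.grad(clean_loss, embeddings)
    perturbation = epsilon * grad / (grad.norm(p=2, dim=-1, keepdim=True) + 1e-12)
    # Minimizing the loss at the perturbed point regularizes the classifier.
    # Virtual adversarial training instead matches the model's own predictions
    # at the perturbed point, so unlabeled text can be used as well.
    return F.cross_entropy(model(embeddings + perturbation.detach()), labels)
```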
A convolutional neural network (CNN) is a neural network that can exploit the internal structure of data, such as the 2D structure of image data.
Ranked #15 on Sentiment Analysis on IMDb
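The entry above carries the same idea over to the 1D order structure of text. A generic one-layer text CNN sketch; the class name and layer sizes are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    # Convolutions over word embeddings exploit the 1D order structure of
    # text, analogous to how 2D convolutions exploit image structure.
    def __init__(self, vocab_size: int, embed_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 100, kernel_size=3, padding=1)
        self.fc = nn.Linear(100, num_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))               # local n-gram features
        x = x.max(dim=2).values                    # max-pool over positions
        return self.fc(x)                          # class logits
```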
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.
Ranked #1 on Question Answering on SQuAD1.1 dev
Tasks: Common Sense Reasoning, Coreference Resolution, Linguistic Acceptability, Named Entity Recognition, Natural Language Inference, Natural Language Understanding, Question Answering, Reading Comprehension, Semantic Textual Similarity, Sentiment Analysis, Word Sense Disambiguation
As transfer learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging.
Ranked #8 on Semantic Textual Similarity on MRPC
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.
Ranked #2 on Common Sense Reasoning on SWAG
With the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining such as BERT achieves better performance than pretraining approaches based on autoregressive language modeling.
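To make that contrast concrete, a toy Python illustration of the two pretraining objectives; the corruption scheme here is schematic, not either approach's exact procedure:

```python
# One toy token sequence, two pretraining objectives.
tokens = ["the", "movie", "was", "great"]

# Autoregressive LM: each target is predicted from its left context only.
ar_examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
# e.g. (["the", "movie"], "was")

# Denoising autoencoding (BERT-style masked LM): corrupt a position and
# predict it from bidirectional context.
corrupted = ["the", "[MASK]", "was", "great"]
mlm_example = (corrupted, (1, "movie"))  # (input, (masked position, target))
```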
Humans read and write hundreds of billions of messages every day.
Ranked #15 on Natural Language Inference on RTE
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.