Sentiment Classification
264 papers with code • 1 benchmark • 2 datasets
Benchmarks
These leaderboards are used to track progress in Sentiment Classification
Leaderboard columns: Trend, Dataset, Best Model, Paper, Code, Compare
Libraries
Use these libraries to find Sentiment Classification models and implementations.

Most implemented papers
A Structured Self-attentive Sentence Embedding
This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention.
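A minimal sketch of the structured self-attention pooling this paper describes: attention weights are computed from the hidden states themselves and used to pool them into a matrix-valued sentence embedding. Dimensions, hop count, and the assumption of BiLSTM outputs as input are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SelfAttentiveEmbedding(nn.Module):
    """Structured self-attention pooling sketch (hidden size, d_a and r are placeholders)."""
    def __init__(self, hidden_dim=256, d_a=128, r=4):
        super().__init__()
        self.W_s1 = nn.Linear(hidden_dim, d_a, bias=False)
        self.W_s2 = nn.Linear(d_a, r, bias=False)

    def forward(self, H):
        # H: (batch, seq_len, hidden_dim) -- e.g. BiLSTM outputs
        A = torch.softmax(self.W_s2(torch.tanh(self.W_s1(H))), dim=1)  # attention over tokens
        M = A.transpose(1, 2) @ H   # (batch, r, hidden_dim) sentence embedding matrix
        return M, A
```

The r attention "hops" let different rows of M focus on different parts of the sentence, which is what makes the embedding interpretable.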
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks.
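The paper generalizes the sequential LSTM to tree topologies. Below is a rough sketch of a Child-Sum Tree-LSTM cell, assuming the children's hidden and cell states are passed in explicitly; names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Child-Sum Tree-LSTM cell sketch: one forget gate per child, children summed otherwise."""
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.iou = nn.Linear(in_dim, 3 * mem_dim)
        self.U_iou = nn.Linear(mem_dim, 3 * mem_dim, bias=False)
        self.fx = nn.Linear(in_dim, mem_dim)
        self.fh = nn.Linear(mem_dim, mem_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (num_children, mem_dim)
        h_tilde = child_h.sum(dim=0)                       # sum of children's hidden states
        i, o, u = torch.chunk(self.iou(x) + self.U_iou(h_tilde), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.fx(x) + self.fh(child_h))   # one forget gate per child
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c
```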
A C-LSTM Neural Network for Text Classification
In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification.
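A rough sketch of the C-LSTM idea: a 1D convolution extracts phrase-level features from word embeddings and an LSTM then models the feature sequence, with the final state used for classification. Vocabulary size, filter counts, and kernel size below are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    """CNN-then-LSTM sentence classifier sketch."""
    def __init__(self, vocab_size=10000, emb_dim=300, n_filters=128,
                 kernel_size=3, hidden_dim=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)
        self.lstm = nn.LSTM(n_filters, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, tokens):                               # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)               # (batch, emb_dim, seq_len)
        feats = torch.relu(self.conv(x)).transpose(1, 2)     # phrase-level feature sequence
        _, (h_n, _) = self.lstm(feats)
        return self.out(h_n[-1])                             # sentiment logits
```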
NEZHA: Neural Contextualized Representation for Chinese Language Understanding
Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.
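For context, fine-tuning such a pre-trained encoder for sentiment classification typically looks like the sketch below, using the Hugging Face transformers API; the checkpoint identifier is a placeholder, not necessarily an official NEZHA release.

```python
# Sketch of fine-tuning/inference with a pre-trained encoder for sentiment classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "your-pretrained-chinese-encoder"   # placeholder checkpoint id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("这部电影很好看", return_tensors="pt")   # "This movie is great"
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1)   # 0/1 sentiment label in this toy setup
```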
Effective LSTMs for Target-Dependent Sentiment Classification
Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence.
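The paper's TD-LSTM variant models target-context relatedness by running one LSTM over the left context plus the target and another, in reverse, over the right context plus the target, then concatenating their final states. The sketch below assumes pre-embedded inputs and illustrative dimensions.

```python
import torch
import torch.nn as nn

class TDLSTM(nn.Module):
    """Target-dependent LSTM sketch: two LSTMs meet at the target word(s)."""
    def __init__(self, emb_dim=300, hidden_dim=128, n_classes=3):
        super().__init__()
        self.lstm_l = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.lstm_r = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, left_with_target, right_with_target):
        # left_with_target:  (batch, len_l, emb_dim) -- left context + target words
        # right_with_target: (batch, len_r, emb_dim) -- right context + target words,
        #                    already reversed so the target is processed last
        _, (h_l, _) = self.lstm_l(left_with_target)
        _, (h_r, _) = self.lstm_r(right_with_target)
        return self.out(torch.cat([h_l[-1], h_r[-1]], dim=-1))
```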
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences.
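The quasi-recurrent layer removes most of that timestep dependence: gates come from convolutions, which run in parallel across time, and only a cheap element-wise pooling recurrence remains. A minimal sketch with placeholder sizes:

```python
import torch
import torch.nn as nn

class QRNNLayer(nn.Module):
    """QRNN layer sketch: convolutional gates plus sequential fo-pooling."""
    def __init__(self, in_dim, hidden_dim, kernel_size=2):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, 3 * hidden_dim, kernel_size,
                              padding=kernel_size - 1)
        self.hidden_dim = hidden_dim

    def forward(self, x):                                   # x: (batch, seq_len, in_dim)
        seq_len = x.size(1)
        g = self.conv(x.transpose(1, 2))[:, :, :seq_len]    # trim right padding -> causal
        z, f, o = g.chunk(3, dim=1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        c = torch.zeros(x.size(0), self.hidden_dim, device=x.device)
        hs = []
        for t in range(seq_len):                            # fo-pooling (element-wise only)
            c = f[:, :, t] * c + (1 - f[:, :, t]) * z[:, :, t]
            hs.append(o[:, :, t] * c)
        return torch.stack(hs, dim=1)                       # (batch, seq_len, hidden_dim)
```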
Aspect Level Sentiment Classification with Deep Memory Network
The importance of each context word and the text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory.
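A compact sketch of that multi-hop attention: each layer (hop) attends over the context-word memory conditioned on the running aspect representation and feeds the result to the next hop. The scoring function, hop count, and dimensions are illustrative simplifications.

```python
import torch
import torch.nn as nn

class AspectMemoryNet(nn.Module):
    """Deep memory network sketch for aspect-level sentiment classification."""
    def __init__(self, emb_dim=300, hops=3, n_classes=3):
        super().__init__()
        self.attn = nn.Linear(2 * emb_dim, 1)
        self.linear = nn.Linear(emb_dim, emb_dim)
        self.out = nn.Linear(emb_dim, n_classes)
        self.hops = hops

    def forward(self, memory, aspect):
        # memory: (batch, seq_len, emb_dim) context-word embeddings
        # aspect: (batch, emb_dim) averaged aspect-word embedding
        v = aspect
        for _ in range(self.hops):
            q = v.unsqueeze(1).expand_as(memory)
            scores = torch.softmax(self.attn(torch.cat([memory, q], dim=-1)), dim=1)
            attended = (scores * memory).sum(dim=1)          # weighted read of the memory
            v = attended + self.linear(v)                    # combine with previous hop
        return self.out(v)
```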
Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders
We present Mockingjay as a new speech representation learning approach, where bidirectional Transformer encoders are pre-trained on a large amount of unlabeled speech.
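The pre-training objective is masked acoustic-frame reconstruction. Below is a toy sketch of that idea: hide a fraction of spectrogram frames, encode with a Transformer encoder, and regress the original frames at the masked positions. Feature size, mask ratio, depth, and the plain L1 loss are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

feat_dim, mask_ratio = 80, 0.15
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
    num_layers=3)
head = nn.Linear(feat_dim, feat_dim)

frames = torch.randn(4, 200, feat_dim)               # (batch, time, mel bins), dummy data
mask = torch.rand(4, 200) < mask_ratio                # frames to hide
masked = frames.masked_fill(mask.unsqueeze(-1), 0.0)  # zero out masked frames

recon = head(encoder(masked))
loss = nn.functional.l1_loss(recon[mask], frames[mask])  # reconstruct only masked frames
loss.backward()
```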
Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis
In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE).
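To make the subtask concrete, ASTE outputs, for each sentence, triplets of (what is discussed, how it is described, why a polarity is assigned). The sentence and spans below are made up purely for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Triplet:
    aspect: str     # WHAT is being talked about
    opinion: str    # HOW it is described
    sentiment: str  # WHY the polarity holds: positive / negative / neutral

sentence = "The battery life is great but the screen is too dim."
triplets: List[Triplet] = [
    Triplet(aspect="battery life", opinion="great", sentiment="positive"),
    Triplet(aspect="screen", opinion="too dim", sentiment="negative"),
]
```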