GloVe-LSTM

Last updated on Mar 15, 2021

Parameters 10 Million
File Size 35.32 MB
Training Data SST

Training Techniques Adam
Architecture LSTM, Linear Layer
LR 0.001
Epochs 5
Batch Size 32

Summary

This model uses pretrained GloVe embeddings with an LSTM encoder and a linear classification layer, and is trained on the binary classification setting of the Stanford Sentiment Treebank (SST-2). It achieves about 87% accuracy on the test set.

Explore the live Sentiment Analysis demo at AllenNLP.
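The stack itself is small: GloVe vectors feed an LSTM encoder, and the encoder's final hidden state passes through a single linear layer to produce the two class scores. The PyTorch sketch below is an illustrative approximation of that pipeline, not the exact AllenNLP model; the hidden size and the final softmax are assumptions made for the example.

import torch
import torch.nn as nn

class GloveLstmClassifier(nn.Module):
    # Illustrative GloVe -> LSTM -> Linear classifier; dimensions are assumed,
    # not taken from the released archive.
    def __init__(self, glove_weights: torch.Tensor, hidden_size: int = 512, num_classes: int = 2):
        super().__init__()
        # glove_weights: (vocab_size, embedding_dim) matrix of pretrained GloVe vectors
        self.embedding = nn.Embedding.from_pretrained(glove_weights, freeze=False)
        self.encoder = nn.LSTM(glove_weights.size(1), hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) indices into the GloVe vocabulary
        embedded = self.embedding(token_ids)
        _, (final_hidden, _) = self.encoder(embedded)      # final_hidden: (1, batch, hidden)
        logits = self.classifier(final_hidden.squeeze(0))  # (batch, num_classes)
        return torch.softmax(logits, dim=-1)               # class probabilities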

How do I load this model?

from allennlp_models.pretrained import load_predictor
predictor = load_predictor("glove-sst")
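If you prefer to pin an exact archive rather than resolve the model by name, the same file used in the CLI examples below can presumably be loaded with Predictor.from_path; the call below is an assumption about the allennlp API rather than part of this model card.

# Assumed alternative: load the archive directly by URL with Predictor.from_path.
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/basic_stanford_sentiment_treebank-2020.06.09.tar.gz"
)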

Getting predictions

sentence = "This film doesn't care about cleverness, wit or any other kind of intelligent humor."
preds = predictor.predict(sentence)
print(f"p(positive)={preds['probs'][0]:.2%}")
# prints: p(positive)=15.60%
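The returned dictionary holds more than the probability vector. A minimal sketch, assuming the output also includes a 'label' key and that the base Predictor class exposes predict_batch_json for batching:

# Assumes 'label' is among the returned keys and predict_batch_json is available.
print(preds.keys())
print("predicted label:", preds["label"])

batch = [
    {"sentence": "A gorgeous, witty, and deeply moving film."},
    {"sentence": "A tedious mess with no redeeming qualities."},
]
for output in predictor.predict_batch_json(batch):
    print(output["label"], f"p(positive)={output['probs'][0]:.2%}")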

You can also get predictions using the allennlp command-line interface:

echo '{"sentence": "This film doesn'\''t care about cleverness, wit or any other kind of intelligent humor."}' | \
    allennlp predict https://storage.googleapis.com/allennlp-public-models/basic_stanford_sentiment_treebank-2020.06.09.tar.gz -

How do I evaluate this model?

To evaluate the model on the Stanford Sentiment Treebank test set, run:

allennlp evaluate https://storage.googleapis.com/allennlp-public-models/basic_stanford_sentiment_treebank-2020.06.09.tar.gz \
    https://allennlp.s3.amazonaws.com/datasets/sst/test.txt

How do I train this model?

To train this model, use the allennlp CLI tool with the configuration file basic_stanford_sentiment_treebank.jsonnet:

allennlp train basic_stanford_sentiment_treebank.jsonnet -s output_dir
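If you would rather launch training from Python (for example inside a notebook), a sketch of the programmatic equivalent follows; it assumes that train_model_from_file is importable from allennlp.commands.train and that the jsonnet file sits in the working directory.

# Assumed programmatic equivalent of `allennlp train ... -s output_dir`.
from allennlp.commands.train import train_model_from_file

train_model_from_file(
    parameter_filename="basic_stanford_sentiment_treebank.jsonnet",
    serialization_dir="output_dir",
)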

See the AllenNLP Training and prediction guide for more details.

Results

Sentiment Analysis on SST-2 Binary Classification

Benchmark                    Model       Metric    Value  Global Rank
SST-2 Binary Classification  GloVe-LSTM  Accuracy  87%    #2