Open Information Extraction

Last updated on Mar 15, 2021

Parameters: 15 million
File Size: 51.68 MB
Training Data: AW-OIE
Training Techniques: AdaDelta
Architecture: Dropout, Linear Layer
Epochs: 500
Batch Size: 80
Encoder Type: alternating_lstm
Encoder Layers: 8
Encoder Input Size: 200
Encoder Hidden Size: 300

Summary

A reimplementation of a deep BiLSTM sequence prediction model (Stanovsky et al., 2018).

Explore the live Open Information Extraction demo at AllenNLP.
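
Following Stanovsky et al. (2018), the model casts Open IE as BIO sequence tagging: for each predicate, every token receives a role tag such as ARG0, V, or ARG1. As a rough sketch (the tags_to_spans helper below is illustrative, not part of allennlp), the tags map back to an extraction like this:

from typing import Dict, List

def tags_to_spans(words: List[str], tags: List[str]) -> Dict[str, str]:
    """Group BIO-tagged tokens into labeled predicate/argument spans."""
    spans: Dict[str, List[str]] = {}
    for word, tag in zip(words, tags):
        if tag == "O":
            continue
        label = tag.split("-", 1)[1]  # strip the B-/I- prefix
        spans.setdefault(label, []).append(word)
    return {label: " ".join(tokens) for label, tokens in spans.items()}

words = ["John", "broke", "the", "window", "with", "a", "rock", "."]
tags = ["B-ARG0", "B-V", "B-ARG1", "I-ARG1", "B-ARGM-MNR", "I-ARGM-MNR", "I-ARGM-MNR", "O"]
print(tags_to_spans(words, tags))
# {'ARG0': 'John', 'V': 'broke', 'ARG1': 'the window', 'ARGM-MNR': 'with a rock'}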

How do I load this model?

from allennlp.predictors.predictor import Predictor
import allennlp_models.structured_prediction  # registers the model and predictor classes

# The "structured-prediction-srl" predictor ID refers to the semantic role
# labeling model, so the Open IE archive is loaded directly by URL here.
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/openie-model.2020.03.26.tar.gz"
)

Getting predictions

sentence = "John broke the window with a rock."
preds = predictor.predict(sentence)
print(preds["verbs"][0]["description"])
# prints a description like:
# [ARG0: John] [V: broke] [ARG1: the window] [ARGM-MNR: with a rock] .
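
A sentence yields one extraction per detected predicate. Assuming the output dictionary shown above (a "verbs" list of extractions plus a token-level "words" list, which is the structure this predictor returns), you can iterate over all of them:

for extraction in preds["verbs"]:
    # "verb" is the predicate; "tags" holds one BIO tag per token in preds["words"]
    print(extraction["verb"], "->", extraction["description"])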

You can also get predictions using the allennlp command-line interface:

echo '{"sentence": "John broke the window with a rock."}' | \
    allennlp predict https://storage.googleapis.com/allennlp-public-models/openie-model.2020.03.26.tar.gz -
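
The predict command also accepts a JSON-lines input file, one {"sentence": ...} object per line; inputs.jsonl and preds.jsonl below are placeholder file names:

allennlp predict https://storage.googleapis.com/allennlp-public-models/openie-model.2020.03.26.tar.gz \
    inputs.jsonl --output-file preds.jsonl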

How do I train this model?

To train this model, you can use the allennlp CLI tool with the configuration file srl.jsonnet:

allennlp train srl.jsonnet -s output_dir
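
Hyperparameters from the configuration (such as the epoch count and batch size listed above) can be overridden on the command line with the -o/--overrides flag; the 5-epoch value below is only an illustration for a quick trial run:

allennlp train srl.jsonnet -s output_dir \
    -o '{"trainer": {"num_epochs": 5}}'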

See the AllenNLP Training and prediction guide for more details.

Citation

@inproceedings{Stanovsky2018SupervisedOI,
  author    = {Gabriel Stanovsky and Julian Michael and Luke Zettlemoyer and Ido Dagan},
  booktitle = {NAACL-HLT},
  title     = {Supervised Open Information Extraction},
  year      = {2018}
}