Neural Event Extraction from Movies Description

WS 2018  ·  Alex Tozzo, Dejan Jovanović, Mohamed Amer

We present a novel approach for event extraction and abstraction from movie descriptions. Our event frame consists of "who", "did what", "to whom", "where", and "when". We formulate our problem using a recurrent neural network, enhanced with structural features extracted from a syntactic parser, and trained using curriculum learning by progressively increasing the difficulty of the sentences. Our model serves as an intermediate step towards question answering systems, visual storytelling, and story completion tasks. We evaluate our approach on the MovieQA dataset.
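The paper itself does not ship code on this page, but the abstract describes a recurrent tagger with parser-derived structural features trained under a curriculum schedule. The sketch below is a minimal, hypothetical illustration of that idea, assuming a BiLSTM token tagger that labels event-frame roles, a per-token dependency-relation id as the structural feature, and sentence length as the difficulty proxy for the curriculum; all names (EventFrameTagger, curriculum_order, train) and hyperparameters are illustrative, not the authors' implementation.

```python
# Illustrative sketch, not the authors' code: a BiLSTM tagger over
# event-frame roles with a structural (dependency-relation) feature,
# trained in curriculum order from easy (short) to hard (long) sentences.
import torch
import torch.nn as nn

ROLES = ["O", "WHO", "DID_WHAT", "TO_WHOM", "WHERE", "WHEN"]

class EventFrameTagger(nn.Module):
    def __init__(self, vocab_size, n_dep_rels, emb=64, hid=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        self.dep_emb = nn.Embedding(n_dep_rels, 16)   # structural feature from a parser
        self.rnn = nn.LSTM(emb + 16, hid, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hid, len(ROLES))

    def forward(self, words, dep_rels):
        x = torch.cat([self.word_emb(words), self.dep_emb(dep_rels)], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)                            # (batch, seq, n_roles)

def curriculum_order(sentences):
    # Difficulty is approximated here by sentence length; the paper's
    # actual difficulty measure may differ.
    return sorted(sentences, key=lambda s: len(s["words"]))

def train(model, sentences, epochs=3, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        # Reveal progressively more (and harder) sentences each epoch.
        ordered = curriculum_order(sentences)
        cutoff = max(1, int(len(ordered) * (epoch + 1) / epochs))
        for s in ordered[:cutoff]:
            words = torch.tensor([s["words"]])        # (1, seq) token ids
            deps = torch.tensor([s["dep_rels"]])      # (1, seq) dependency-relation ids
            gold = torch.tensor([s["roles"]])         # (1, seq) role labels
            logits = model(words, deps)
            loss = loss_fn(logits.view(-1, len(ROLES)), gold.view(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
```

In this reading, the curriculum simply controls which sentences are visible at each epoch; any other difficulty metric (parse depth, clause count) could be swapped into curriculum_order without changing the rest of the loop.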
