Story Generation with Commonsense Knowledge Graphs and Axioms

Humans can seamlessly understand stories and the rich interactions between agents, locations, and events. However, state-of-the-art reasoning models struggle to understand, complete, or explain stories, often due to the complexity of the underlying common sense necessary for comprehension. One potential reason models perform poorly is the lack of large-scale training data that supplies annotations of the common sense necessary for story understanding. In this paper, we investigate the generation of stories at scale, combining commonsense axioms with commonsense knowledge graphs to produce stories annotated with common sense. We first demonstrate that commonsense axioms and commonsense knowledge graphs are sufficient to capture the underlying narratives in a popular story corpus. Our method aligns story types with commonsense axioms and with queries to a commonsense knowledge graph, enabling the generation of hundreds of thousands of stories. We evaluate these stories for sensibility and interestingness through a crowdsourcing task. Using our story corpus, we also design a probing task with questions for three exemplar story types. Our results show that our method generates story endings of higher quality than current generative language models. This work points to key open challenges around generating better stories, providing more comprehensive explanations, and building models that can explain any story with axioms.
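To make the pipeline concrete, here is a minimal sketch (not the authors' implementation) of how a story type, expressed as an axiom-like schema, can be instantiated by querying a commonsense knowledge graph of ConceptNet-style triples. The triple store, the "desire fulfilled" schema, and all entity names below are illustrative assumptions, not data from the paper.

```python
# Toy ConceptNet-style knowledge graph: (head, relation, tail) triples.
# These triples are invented for illustration only.
TRIPLES = [
    ("alice", "AtLocation", "kitchen"),
    ("alice", "Desires", "coffee"),
    ("coffee", "AtLocation", "kitchen"),
]

def query(head=None, relation=None, tail=None):
    """Return all triples matching a partial (head, relation, tail) pattern."""
    return [
        (h, r, t) for (h, r, t) in TRIPLES
        if (head is None or h == head)
        and (relation is None or r == relation)
        and (tail is None or t == tail)
    ]

def generate_story(agent):
    """Instantiate a hypothetical 'desire fulfilled' story type for an agent.

    Axiom-like schema: if the agent desires X, and X is at the agent's
    location, then the agent can find X there.
    """
    desires = query(head=agent, relation="Desires")
    locations = query(head=agent, relation="AtLocation")
    for _, _, goal in desires:
        for _, _, place in locations:
            # The axiom's precondition: the desired thing is co-located.
            if query(head=goal, relation="AtLocation", tail=place):
                return (f"{agent} was in the {place}. "
                        f"{agent} wanted {goal}. "
                        f"{agent} found {goal} in the {place}.")
    return None  # No story of this type is supported by the graph.

print(generate_story("alice"))
```

Scaling this idea to a large knowledge graph with many story schemas is what would yield the hundreds of thousands of annotated stories described above; each generated story carries, by construction, the axiom and triples that justify it.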
