Semantic Vector Spaces for Broadening Consideration of Consequences

23 Feb 2018 · Douglas Summers Stay

Reasoning systems with too simple a model of the world and of human intent are unable to consider the potential negative side effects of their actions and to modify their plans to avoid them (for example, errors that could otherwise have been avoided). However, hand-encoding the enormous and subtle body of facts that constitutes common sense into a knowledge base has proved too difficult despite decades of work. Distributed semantic vector spaces learned from large text corpora, on the other hand, can capture shades of meaning of common-sense concepts and support analogical and associational reasoning that knowledge bases are too rigid to perform, by encoding concepts and the relations between them as geometric structures. Their disadvantage is that they are unreliable, poorly understood, and biased in their view of the world by the source material. This chapter discusses how the two approaches may be combined so as to retain the best properties of each, yielding a richer understanding of the world and of human intentions.
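
To make the analogical and associational reasoning mentioned above concrete, the sketch below runs a standard word-vector analogy query. It is purely illustrative and is not the chapter's own system: it assumes the gensim library and its downloadable `glove-wiki-gigaword-100` GloVe embedding, and the particular queries (a capital-of analogy, a neighborhood around "flood") are chosen only as examples.

```python
# Illustrative sketch (assumption: gensim and its pretrained GloVe download are available).
# Shows how a distributed semantic vector space supports analogical reasoning by
# treating relations between concepts as geometric offsets between word vectors.
import gensim.downloader as api

# Load a small pretrained embedding (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-100")

# Analogy: "Paris is to France as Rome is to ?"
# Computed as vec(france) - vec(paris) + vec(rome), then nearest neighbors.
result = vectors.most_similar(positive=["france", "rome"], negative=["paris"], topn=3)
print(result)  # nearest words to the offset vector, e.g. 'italy' near the top

# Associational reasoning: concepts near "flood" hint at related consequences
# (damage, evacuation), the kind of side-effect knowledge a rigid KB may omit.
print(vectors.most_similar("flood", topn=5))
```

The vector offset arithmetic here is the geometric encoding of relations the abstract refers to; a hybrid system of the kind proposed would combine such learned neighborhoods with curated knowledge-base facts rather than rely on either alone.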
