Learning abstract perceptual notions: the example of space

24 Jul 2019  ·  Alexander V. Terekhov, J. Kevin O'Regan

Humans are extremely swift learners. We are able to grasp highly abstract notions, whether they come from art perception or pure mathematics. Current machine learning techniques demonstrate astonishing results in extracting patterns from incoming information. Yet the abstract notions we possess are more than just statistical patterns in that information. Sensorimotor theory suggests that they represent functions, or laws, describing how the information can be transformed; in other words, they represent the statistics of sensorimotor changes rather than of the sensory inputs themselves. The aim of our work is to suggest a way for machine learning and sensorimotor theory to benefit from each other so as to pave the way toward new horizons in learning. We show in this study that a highly abstract notion, that of space, can be seen as a collection of laws governing the transformation of sensory information, and that these laws could in theory be learned by a naive agent. As an illustration we run a one-dimensional simulation in which an agent extracts spatial knowledge in the form of internalized ("sensible") rigid displacements. The agent uses them to encode its own displacements in a way that is isometrically related to external space. Though the algorithm enabling the acquisition of rigid displacements is designed ad hoc, we believe it can stimulate the development of unsupervised learning techniques leading to similar results.
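The paper's simulation code is not linked here, but the core idea, that spatial knowledge can be recovered as laws of transformation of sensory input, can be sketched in a few lines. The toy below is a minimal illustration under stated assumptions, not the authors' algorithm: the world (`env`, size `N`), the retina offsets (`RETINA`), the hidden command-to-displacement map (`HIDDEN`), and the helpers `sense`, `run`, and `same_effect` are all hypothetical. A naive agent on a circular 1D world issues opaque motor commands, detects compositional laws ("command u then v always has the same sensory effect as w, or as doing nothing") purely from sensory snapshots, and solves for a numeric code phi per command. The recovered codes come out proportional to the hidden displacements, i.e. isometric to external space up to a free scale.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a static 1D circular world with a rich sensory
# field, and an agent whose motor commands are opaque labels whose
# displacement sizes are hidden from it.
N = 997                              # world size (circular)
env = rng.standard_normal(N)         # fixed environmental field
RETINA = np.arange(0, 40, 3)         # sensor offsets on the agent's body

HIDDEN = {"a": 3, "b": 6, "c": 9, "d": -3}   # unknown to the agent

def sense(pos):
    """Sensory snapshot at a hidden world position (all the agent sees)."""
    return tuple(np.round(env[(pos + RETINA) % N], 6))

def run(pos, seq):
    """Apply a sequence of motor commands starting from a position."""
    for cmd in seq:
        pos = (pos + HIDDEN[cmd]) % N
    return pos

def same_effect(seq1, seq2, trials=20):
    """True if both command sequences always carry equal sensory states to
    equal sensory states -- the signature of one and the same displacement."""
    for _ in range(trials):
        p = rng.integers(N)
        if sense(run(p, seq1)) != sense(run(p, seq2)):
            return False
    return True

# Step 1: collect compositional laws among commands. Each detected identity
# "u then v acts like w" (or "like doing nothing") becomes a linear equation
# over the unknown codes phi(cmd); no displacement magnitude is ever observed.
cmds = list(HIDDEN)
rows = []
for u, v in itertools.product(cmds, repeat=2):
    targets = [[w] for w in cmds] + [[]]    # single commands and the identity
    for t in targets:
        if same_effect([u, v], t):
            row = np.zeros(len(cmds))
            row[cmds.index(u)] += 1
            row[cmds.index(v)] += 1
            if t:
                row[cmds.index(t[0])] -= 1
            rows.append(row)

# Step 2: fix an arbitrary unit (phi("a") = 1) and solve the system in the
# least-squares sense. The result is a code for the agent's own displacements
# that is isometric, up to that free scale, to external space.
unit = np.zeros(len(cmds))
unit[cmds.index("a")] = 1.0
A = np.vstack(rows + [unit])
b = np.zeros(len(A))
b[-1] = 1.0
phi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(cmds, np.round(phi, 3))))
# -> {'a': 1.0, 'b': 2.0, 'c': 3.0, 'd': -1.0}: proportional to the hidden
#    displacements (3, 6, 9, -3), i.e. an isometric encoding of space.
```

Note the design choice: the agent never compares positions, only sensory snapshots, so the additive structure it recovers is a property of its sensorimotor laws, not of any privileged access to external coordinates.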
