Language guided machine action

23 Nov 2020 · Feng Qi

Here we build a hierarchical modular network called Language guided machine action (LGMA), whose modules process information streams in a manner that mimics the human cortical network, allowing it to achieve multiple general tasks such as language guided action, intention decomposition, and mental simulation before action execution. LGMA contains 3 main systems: (1) a primary sensory system that processes multimodal sensory information from vision, language, and sensorimotor input. (2) An association system, in which Wernicke and Broca modules comprehend and synthesize language, a BA14/40 module translates between sensorimotor and language representations, a midTemporal module converts between language and vision, and the superior parietal lobe integrates the attended visual object and arm state into a cognitive map for future spatial actions. The pre-supplementary motor area (pre-SMA) converts a high-level intention into sequential atomic actions, while the SMA integrates these atomic actions, the current arm state, and the attended object state into a sensorimotor vector that applies the corresponding torques to the arm, via the premotor and primary motor areas, to achieve the intention. (3) A high-level executive system containing the PFC, which performs explicit inference and guides voluntary action based on language, while the BG serves as the habitual action control center.
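Since no implementation accompanies the paper, the following is a minimal, hypothetical Python sketch of the language-to-action flow described above (language comprehension → pre-SMA decomposition into atomic actions → SMA integration with arm/object state → motor torques). Every class name, method signature, and vector dimension here is an assumption made purely for illustration; none come from the paper.

```python
# Hypothetical sketch of the LGMA information flow described in the abstract.
# All module interfaces and sizes are assumptions, not the authors' code.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class State:
    arm: np.ndarray              # current arm joint state (assumed 6-DoF)
    attended_object: np.ndarray  # features of the attended visual object (assumed)


class Wernicke:
    """Comprehends a language instruction into an intention vector (assumed interface)."""
    def comprehend(self, instruction: str) -> np.ndarray:
        return np.random.rand(32)  # placeholder intention embedding


class PreSMA:
    """Decomposes a high-level intention into sequential atomic actions."""
    def decompose(self, intention: np.ndarray) -> List[np.ndarray]:
        return [np.random.rand(8) for _ in range(3)]  # placeholder atomic actions


class SMA:
    """Integrates an atomic action with arm/object state into a sensorimotor vector."""
    def integrate(self, atomic: np.ndarray, state: State) -> np.ndarray:
        return np.concatenate([atomic, state.arm, state.attended_object])


class MotorCortex:
    """Maps a sensorimotor vector to joint torques (stands in for premotor + M1)."""
    def torques(self, sensorimotor: np.ndarray) -> np.ndarray:
        return np.tanh(sensorimotor[:6])  # placeholder 6-DoF torque command


def language_guided_action(instruction: str, state: State) -> List[np.ndarray]:
    """End-to-end flow: language -> intention -> atomic actions -> torque commands."""
    intention = Wernicke().comprehend(instruction)
    torque_sequence = []
    for atomic in PreSMA().decompose(intention):
        sensorimotor = SMA().integrate(atomic, state)
        torque_sequence.append(MotorCortex().torques(sensorimotor))
    return torque_sequence
```

The point of the sketch is the modular routing: each cortical module is an isolated component exchanging fixed-size vectors, so any stage (here filled with placeholders) could be swapped for a learned network without changing the overall pipeline.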
