On the Performance of Hierarchical Distributed Correspondence Graphs for Efficient Symbol Grounding of Robot Instructions

Natural language interfaces are powerful tools that enable humans and robots to convey information without the need for extensive training or complex graphical interfaces. Statistical techniques that employ probabilistic graphical models have proven effective at interpreting symbols that represent commands and observations for robot direction-following and object manipulation. A limitation of these approaches is their inefficiency in dealing with larger and more complex symbolic representations. Herein, we present a model for language understanding that uses parse trees and environment models both to learn the structure of probabilistic graphical models and to perform inference over this learned structure for symbol grounding. This model, called the Hierarchical Distributed Correspondence Graph (HDCG), exploits information about symbols that are expressed in the corpus to construct minimalist graphical models that are more efficient to search. In a series of comparative experiments, we demonstrate a significant improvement in efficiency without loss in accuracy over contemporary approaches for human-robot interaction.
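To make the factorization concrete, the sketch below illustrates the Distributed Correspondence Graph idea that HDCG builds on: each phrase/grounding pair is assigned a binary correspondence variable, and inference searches for the assignment that maximizes the product of per-factor scores. This is an illustrative toy, not the authors' implementation; the phrases, groundings, and factor scores are hypothetical stand-ins for a learned log-linear model, and the exhaustive search stands in for the structured inference that HDCG's learned minimal graphs accelerate.

```python
from itertools import product

# Hypothetical parsed phrases and environment symbols (not from the paper's corpus).
phrases = ["the ball", "near the box"]
groundings = ["ball_1", "box_1", "region_a"]

def factor(phrase, grounding, corr):
    """Toy factor: placeholder scores standing in for a learned model."""
    match = {("the ball", "ball_1"), ("near the box", "region_a")}
    if corr:
        return 0.9 if (phrase, grounding) in match else 0.1
    return 0.1 if (phrase, grounding) in match else 0.9

def best_assignment():
    pairs = list(product(phrases, groundings))
    best, best_score = None, -1.0
    # Exhaustive search over all 2^|pairs| correspondence assignments.
    # The DCG factorization lets each variable be maximized independently,
    # and HDCG further prunes which variables are instantiated at all.
    for bits in product([False, True], repeat=len(pairs)):
        score = 1.0
        for (ph, gr), corr in zip(pairs, bits):
            score *= factor(ph, gr, corr)
        if score > best_score:
            best, best_score = dict(zip(pairs, bits)), score
    return best

assignment = best_assignment()
active = [pg for pg, corr in assignment.items() if corr]
print(active)  # the phrase/grounding pairs judged to correspond
```

Because each correspondence variable here is scored independently, the maximizing assignment simply activates the pairs the factor favors; the cost of the naive search grows exponentially in the number of variables, which is the inefficiency the learned minimal graphs address.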
