Situated Multimodal Control of a Mobile Robot: Navigation through a Virtual Environment

We present a new interface for controlling a mobile robot as it navigates novel environments, using coordinated gesture and language. The system comprises a TurtleBot3 robot equipped with a LIDAR and a camera, an embodied simulation of what the robot has encountered while exploring, and a cross-platform bridge facilitating generic communication... A human partner can deliver instructions to the robot using spoken English and gestures relative to the simulated environment, guiding the robot through navigation tasks.
