Mapping Instructions and Visual Observations to Actions with Reinforcement Learning

We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input...
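The core idea — a single model that fuses a visual observation with an instruction embedding and outputs a distribution over actions — can be sketched as below. This is a minimal illustration, not the paper's architecture: the feature dimensions, the randomly initialized linear head, and the softmax policy are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): visual feature size,
# instruction-embedding size, and a small discrete action space.
IMG_DIM, TXT_DIM, N_ACTIONS = 64, 32, 5

# A single linear policy head over the fused (visual + linguistic)
# representation, randomly initialized for this sketch.
W = rng.normal(scale=0.01, size=(IMG_DIM + TXT_DIM, N_ACTIONS))

def policy(img_feats, txt_emb):
    """Map one visual observation and one instruction embedding to a
    probability distribution over actions (softmax over logits)."""
    fused = np.concatenate([img_feats, txt_emb])  # joint reasoning input
    logits = fused @ W
    exp = np.exp(logits - logits.max())           # numerically stable softmax
    return exp / exp.sum()

probs = policy(rng.normal(size=IMG_DIM), rng.normal(size=TXT_DIM))
action = int(np.argmax(probs))                    # greedy action selection
```

In the full approach such a policy would be trained with reinforcement learning from execution feedback rather than used with random weights as here.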
