Search Results for author: Haoshuo Huang

Found 4 papers, 1 paper with code

Multi-modal Discriminative Model for Vision-and-Language Navigation

No code implementations · WS 2019 · Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation (VLN) is a natural language grounding task where agents have to interpret natural language instructions in the context of visual scenes in a dynamic environment to achieve prescribed navigation goals.

Vision and Language Navigation

Transferable Representation Learning in Vision-and-Language Navigation

No code implementations · ICCV 2019 · Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals.

Representation Learning · Vision and Language Navigation

VALAN: Vision and Language Agent Navigation

1 code implementation · 6 Dec 2019 · Larry Lansing, Vihan Jain, Harsh Mehta, Haoshuo Huang, Eugene Ie

VALAN is a lightweight and scalable software framework for deep reinforcement learning based on the SEED RL architecture.
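The SEED RL architecture that VALAN is described as building on separates acting from learning: actors step environments and send observations to a central learner, which performs inference and training in one place. A minimal single-machine sketch of that split is below; the names (`ToyEnv`, `learner_policy`) are illustrative stand-ins, not VALAN's actual API.

```python
import queue
import threading

class ToyEnv:
    """Toy environment: the state is a step counter; episodes end at max_steps."""
    def __init__(self, max_steps=3):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        done = self.t >= self.max_steps
        return self.t, 1.0, done  # observation, reward, done

def learner_policy(obs):
    """Stand-in for centralized inference on the learner (returns a fixed action)."""
    return 0

def run_actor(actor_id, env, trajectory_queue, num_episodes=2):
    """Actor loop: query the learner for each action, ship finished trajectories."""
    for _ in range(num_episodes):
        obs, done, traj = env.reset(), False, []
        while not done:
            action = learner_policy(obs)          # inference lives learner-side
            obs, reward, done = env.step(action)
            traj.append((obs, reward))
        trajectory_queue.put((actor_id, traj))    # hand trajectory to the learner

trajectory_queue = queue.Queue()
actors = [threading.Thread(target=run_actor, args=(i, ToyEnv(), trajectory_queue))
          for i in range(2)]
for t in actors:
    t.start()
for t in actors:
    t.join()

# The learner would now drain the queue and run a policy-gradient update.
print(trajectory_queue.qsize())  # 2 actors x 2 episodes = 4 trajectories
```

In the real architecture the queue would be a gRPC stream between machines and the policy a neural network, but the division of labor is the same: actors stay model-free, and the learner owns both inference and training.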

Reinforcement Learning (RL) +1
