Search Results for author: Vihan Jain

Found 15 papers, 8 papers with code

Wide & Deep Learning for Recommender Systems

36 code implementations · 24 Jun 2016 · Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah

Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort.
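The "wide" component's memorization relies on cross-product transformations of sparse categorical features. A minimal sketch of one such transform, assuming binary (one-active-category) features; the function name and the "AND" string format are illustrative, not taken from the paper:

```python
from itertools import combinations

def cross_product_transform(feature_values):
    """Generate pairwise cross-product (AND) features from categorical features.

    feature_values: dict mapping feature name -> active category, e.g.
    {"gender": "female", "language": "en"}. Each crossed feature fires (value 1)
    only when both of its constituent features are active.
    """
    crossed = {}
    for a, b in combinations(sorted(feature_values), 2):
        crossed[f"{a}={feature_values[a]} AND {b}={feature_values[b]}"] = 1
    return crossed
```

In the paper's setting these crossed features feed a linear ("wide") model alongside a deep network over embeddings, which supplies the generalization the crossed features lack.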

Click-Through Rate Prediction · Feature Engineering · +3

Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation

no code implementations · ACL 2019 · Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, Jason Baldridge

We also show that the existing paths in the dataset are not ideal for evaluating instruction following because they are direct-to-goal shortest paths.

Instruction Following · Vision and Language Navigation

Multi-modal Discriminative Model for Vision-and-Language Navigation

no code implementations · WS 2019 · Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation (VLN) is a natural language grounding task where agents have to interpret natural language instructions in the context of visual scenes in a dynamic environment to achieve prescribed navigation goals.

Vision and Language Navigation

General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping

1 code implementation · 11 Jul 2019 · Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, Jason Baldridge

We address fundamental flaws in previously used metrics and show how Dynamic Time Warping (DTW), a long-known method for measuring similarity between two time series, can be used to evaluate navigation agents.
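The metric builds on the standard DTW dynamic program, which finds the minimum-cost monotonic alignment between two sequences. A minimal sketch of that computation for 1-D series (function and variable names are illustrative; the paper applies DTW to agent and reference paths in an environment, with a suitable distance function):

```python
import numpy as np

def dtw_distance(series_a, series_b, dist=lambda x, y: abs(x - y)):
    """Classic O(n*m) dynamic-programming DTW between two sequences."""
    n, m = len(series_a), len(series_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(series_a[i - 1], series_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in series_a only
                                 cost[i, j - 1],      # step in series_b only
                                 cost[i - 1, j - 1])  # step in both
    return cost[n, m]
```

Because DTW allows one point to align with several points of the other series, a path that lingers or moves at a different pace can still score a perfect match if it traces the same route.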

Dynamic Time Warping · Navigate · +2

Transferable Representation Learning in Vision-and-Language Navigation

no code implementations · ICCV 2019 · Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals.

Representation Learning · Vision and Language Navigation

RecSim: A Configurable Simulation Platform for Recommender Systems

1 code implementation · 11 Sep 2019 · Eugene Ie, Chih-Wei Hsu, Martin Mladenov, Vihan Jain, Sanmit Narvekar, Jing Wang, Rui Wu, Craig Boutilier

We propose RecSim, a configurable platform for authoring simulation environments for recommender systems (RSs) that naturally supports sequential interaction with users.
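The defining feature here is sequential interaction: the user's state evolves in response to what the agent recommends. A toy sketch of that dynamic, not RecSim's actual API (the class, function, and parameter names below are hypothetical):

```python
import random

class ToyUser:
    """Hypothetical stand-in for a simulated user whose interest drifts with clicks."""
    def __init__(self, interest=0.5, seed=0):
        self.interest = interest
        self.rng = random.Random(seed)

    def respond(self, doc_quality):
        clicked = self.rng.random() < self.interest * doc_quality
        if clicked:
            # Sequential effect: each click nudges future interest upward.
            self.interest = min(1.0, self.interest + 0.05)
        return clicked

def run_session(user, slate, steps=10):
    """Simulate a multi-step session between a trivial agent and the user."""
    clicks = 0
    for _ in range(steps):
        doc = max(slate)  # trivial policy: always recommend the best-quality doc
        clicks += user.respond(doc)
    return clicks
```

A platform like the one described would let researchers swap in different user-state transition models and agent policies while keeping this interaction loop fixed.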

Recommendation Systems · reinforcement-learning · +1

Generalized Natural Language Grounded Navigation via Environment-agnostic Multitask Learning

no code implementations · 25 Sep 2019 · Xin Wang, Vihan Jain, Eugene Ie, William Wang, Zornitsa Kozareva, Sujith Ravi

Recent research efforts enable the study of natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.

Vision-Language Navigation

VALAN: Vision and Language Agent Navigation

1 code implementation · 6 Dec 2019 · Larry Lansing, Vihan Jain, Harsh Mehta, Haoshuo Huang, Eugene Ie

VALAN is a lightweight and scalable software framework for deep reinforcement learning based on the SEED RL architecture.

reinforcement-learning · Reinforcement Learning (RL) · +1

Environment-agnostic Multitask Learning for Natural Language Grounded Navigation

1 code implementation · ECCV 2020 · Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, Sujith Ravi

Recent research efforts enable the study of natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.

Vision-Language Navigation

BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps

1 code implementation · ACL 2020 · Wang Zhu, Hexiang Hu, Jiacheng Chen, Zhiwei Deng, Vihan Jain, Eugene Ie, Fei Sha

To this end, we propose BabyWalk, a new VLN agent that learns to navigate by decomposing long instructions into shorter ones (BabySteps) and completing them sequentially.

Imitation Learning · Navigate · +1

Learning to Represent Image and Text with Denotation Graph

no code implementations · EMNLP 2020 · BoWen Zhang, Hexiang Hu, Vihan Jain, Eugene Ie, Fei Sha

Recent progress has leveraged the ideas of pre-training (from language modeling) and attention layers in Transformers to learn representations from datasets containing images aligned with linguistic expressions that describe the images.

Attribute · Image Retrieval · +4

A Hierarchical Multi-Modal Encoder for Moment Localization in Video Corpus

no code implementations · 18 Nov 2020 · BoWen Zhang, Hexiang Hu, Joonseok Lee, Ming Zhao, Sheide Chammas, Vihan Jain, Eugene Ie, Fei Sha

Identifying a short segment in a long video that semantically matches a text query is a challenging task with important potential applications in language-based video search, browsing, and navigation.

Language Modelling · Masked Language Modeling · +3

On the Evaluation of Vision-and-Language Navigation Instructions

no code implementations · EACL 2021 · Ming Zhao, Peter Anderson, Vihan Jain, Su Wang, Alexander Ku, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions.

Vision and Language Navigation

RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems

1 code implementation · 14 Mar 2021 · Martin Mladenov, Chih-Wei Hsu, Vihan Jain, Eugene Ie, Christopher Colby, Nicolas Mayoraz, Hubert Pham, Dustin Tran, Ivan Vendrov, Craig Boutilier

The development of recommender systems that optimize multi-turn interaction with users, and that model the interactions of different agents (e.g., users, content providers, vendors) in the recommender ecosystem, has drawn increasing attention in recent years.

counterfactual · Probabilistic Programming · +1
