Search Results for author: Eugene Ie

Found 24 papers, 11 papers with code

RecSim: A Configurable Simulation Platform for Recommender Systems

1 code implementation · 11 Sep 2019 · Eugene Ie, Chih-Wei Hsu, Martin Mladenov, Vihan Jain, Sanmit Narvekar, Jing Wang, Rui Wu, Craig Boutilier

We propose RecSim, a configurable platform for authoring simulation environments for recommender systems (RSs) that naturally supports sequential interaction with users.

Recommendation Systems reinforcement-learning
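
The abstract above describes a configurable platform for simulating sequential user interaction. As a rough illustration of that general idea (not RecSim's actual API), here is a toy loop in which a greedy recommender interacts with a simulated user whose interests drift toward consumed items; the function name, the linear choice model, and the drift rule are all illustrative assumptions.

```python
# Toy sequential recommendation loop: a simulated user with latent interests
# responds to recommended items turn by turn. Illustrative sketch only; RecSim
# provides configurable user, document, and choice models rather than this code.
import random

def simulate_session(user_interests, catalog, num_turns=5, seed=0):
    """Greedy recommender interacting with a simulated user; returns total clicks."""
    rng = random.Random(seed)
    clicks = 0
    for _ in range(num_turns):
        # recommend the item with highest affinity to current (latent) interests
        item = max(catalog, key=lambda f: sum(u * x for u, x in zip(user_interests, f)))
        affinity = sum(u * x for u, x in zip(user_interests, item))
        if rng.random() < max(0.0, min(1.0, affinity)):  # stochastic click model
            clicks += 1
            # user interests drift slightly toward the consumed item
            user_interests = [0.9 * u + 0.1 * x for u, x in zip(user_interests, item)]
    return clicks

catalog = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
print(simulate_session((0.8, 0.1), catalog))
```

Because the user state evolves with each consumed item, evaluating a recommender here requires multi-turn rollouts rather than one-shot accuracy, which is the sequential setting the platform targets.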

RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems

1 code implementation · 14 Mar 2021 · Martin Mladenov, Chih-Wei Hsu, Vihan Jain, Eugene Ie, Christopher Colby, Nicolas Mayoraz, Hubert Pham, Dustin Tran, Ivan Vendrov, Craig Boutilier

The development of recommender systems that optimize multi-turn interaction with users, and model the interactions of different agents (e.g., users, content providers, vendors) in the recommender ecosystem, has drawn increasing attention in recent years.

counterfactual Probabilistic Programming

General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping

1 code implementation · 11 Jul 2019 · Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, Jason Baldridge

We address fundamental flaws in previously used metrics and show how Dynamic Time Warping (DTW), a long-known method of measuring similarity between two time series, can be used to evaluate navigation agents.

Dynamic Time Warping Navigate
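
Since the abstract turns on applying DTW to agent trajectories, a tiny self-contained sketch of classic DTW over two 2-D paths may help. The function name and the Euclidean step cost are illustrative assumptions, not the paper's normalized-DTW implementation.

```python
# Minimal Dynamic Time Warping between two paths of 2-D points: the classic
# dynamic program that finds the cheapest monotone alignment of two sequences.
import math

def dtw(path_a, path_b):
    """Return the minimal cumulative alignment cost between two paths."""
    n, m = len(path_a), len(path_b)
    # dp[i][j] = best cost aligning path_a[:i] with path_b[:j]
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(path_a[i - 1], path_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # advance in path_a only
                                  dp[i][j - 1],      # advance in path_b only
                                  dp[i - 1][j - 1])  # advance in both
    return dp[n][m]

print(dtw([(0, 0), (1, 0), (2, 0)], [(0, 0), (2, 0)]))  # 1.0
```

The appeal for navigation evaluation is that the alignment scores the whole trajectory against the reference path, not just the endpoint.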

VALAN: Vision and Language Agent Navigation

1 code implementation · 6 Dec 2019 · Larry Lansing, Vihan Jain, Harsh Mehta, Haoshuo Huang, Eugene Ie

VALAN is a lightweight and scalable software framework for deep reinforcement learning based on the SEED RL architecture.

Reinforcement Learning (RL)

Environment-agnostic Multitask Learning for Natural Language Grounded Navigation

1 code implementation ECCV 2020 Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, Sujith Ravi

Recent research efforts enable the study of natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.

Vision-Language Navigation

BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps

1 code implementation ACL 2020 Wang Zhu, Hexiang Hu, Jiacheng Chen, Zhiwei Deng, Vihan Jain, Eugene Ie, Fei Sha

To this end, we propose BabyWalk, a new VLN agent that learns to navigate by decomposing long instructions into shorter ones (BabySteps) and completing them sequentially.

Imitation Learning Navigate

Spatial Language Representation with Multi-Level Geocoding

1 code implementation · 21 Aug 2020 · Sayali Kulkarni, Shailee Jain, Mohammad Javad Hosseini, Jason Baldridge, Eugene Ie, Li Zhang

We present a multi-level geocoding model (MLG) that learns to associate texts to geographic locations.

Toponym Resolution

Using Web Co-occurrence Statistics for Improving Image Categorization

no code implementations · 19 Dec 2013 · Samy Bengio, Jeff Dean, Dumitru Erhan, Eugene Ie, Quoc Le, Andrew Rabinovich, Jonathon Shlens, Yoram Singer

Despite the simplicity of the resulting optimization problem, it is effective in improving both recognition and localization accuracy.

Common Sense Reasoning Image Categorization

Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation

no code implementations ACL 2019 Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, Jason Baldridge

We also show that the existing paths in the dataset are not ideal for evaluating instruction following because they are direct-to-goal shortest paths.

Instruction Following Vision and Language Navigation

Multi-modal Discriminative Model for Vision-and-Language Navigation

no code implementations WS 2019 Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation (VLN) is a natural language grounding task where agents have to interpret natural language instructions in the context of visual scenes in a dynamic environment to achieve prescribed navigation goals.

Vision and Language Navigation

Transferable Representation Learning in Vision-and-Language Navigation

no code implementations ICCV 2019 Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals.

Representation Learning Vision and Language Navigation

Learning Dense Representations for Entity Retrieval

no code implementations CONLL 2019 Daniel Gillick, Sayali Kulkarni, Larry Lansing, Alessandro Presta, Jason Baldridge, Eugene Ie, Diego Garcia-Olano

We show that it is feasible to perform entity linking by training a dual encoder (two-tower) model that encodes mentions and entities in the same dense vector space, where candidate entities are retrieved by approximate nearest neighbor search.

Entity Linking Entity Retrieval
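
The abstract describes embedding mentions and entities into one dense vector space and retrieving candidates by nearest-neighbor search. The sketch below illustrates that retrieval pattern with a deterministic stand-in "encoder"; the paper trains neural dual encoders and uses approximate (not exact, brute-force) nearest-neighbor search, so everything here is an illustrative assumption.

```python
# Toy dual-encoder retrieval: encode query and candidates into the same space,
# then rank candidates by dot product. The character-sum "embedding" is a
# deterministic stand-in for a trained neural encoder.
import math

def embed(text, dim=8):
    """Toy deterministic embedding: bucket tokens by character sum, L2-normalize."""
    v = [0.0] * dim
    for tok in text.lower().split():
        v[sum(ord(c) for c in tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def retrieve(mention, entities, k=2):
    """Rank candidate entities by dot product with the mention embedding."""
    q = embed(mention)
    scored = sorted(((sum(a * b for a, b in zip(q, embed(e))), e)
                     for e in entities), reverse=True)
    return [e for _, e in scored[:k]]

entities = ["Paris France", "Paris Texas", "Berlin Germany"]
print(retrieve("Paris", entities, k=1))  # ['Paris France']
```

The design point the paper exploits is that entity vectors can be precomputed and indexed once, so linking at query time reduces to a single encoder pass plus a nearest-neighbor lookup.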

Mean-Field Approximation to Gaussian-Softmax Integral with Application to Uncertainty Estimation

no code implementations · 13 Jun 2020 · Zhiyun Lu, Eugene Ie, Fei Sha

Many methods have been proposed to quantify the predictive uncertainty associated with the outputs of deep neural networks.

Out-of-Distribution Detection

Learning to Represent Image and Text with Denotation Graph

no code implementations EMNLP 2020 BoWen Zhang, Hexiang Hu, Vihan Jain, Eugene Ie, Fei Sha

Recent progress has leveraged the ideas of pre-training (from language modeling) and attention layers in Transformers to learn representations from datasets containing images aligned with linguistic expressions that describe the images.

Attribute Image Retrieval

A Hierarchical Multi-Modal Encoder for Moment Localization in Video Corpus

no code implementations · 18 Nov 2020 · BoWen Zhang, Hexiang Hu, Joonseok Lee, Ming Zhao, Sheide Chammas, Vihan Jain, Eugene Ie, Fei Sha

Identifying a short segment in a long video that semantically matches a text query is a challenging task with important potential applications in language-based video search, browsing, and navigation.

Language Modelling Masked Language Modeling

On the Evaluation of Vision-and-Language Navigation Instructions

no code implementations EACL 2021 Ming Zhao, Peter Anderson, Vihan Jain, Su Wang, Alexander Ku, Jason Baldridge, Eugene Ie

Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions.

Vision and Language Navigation

Generalized Natural Language Grounded Navigation via Environment-agnostic Multitask Learning

no code implementations25 Sep 2019 Xin Wang, Vihan Jain, Eugene Ie, William Wang, Zornitsa Kozareva, Sujith Ravi

Recent research efforts enable the study of natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.

Vision-Language Navigation

Pedestrian Crossing Action Recognition and Trajectory Prediction with 3D Human Keypoints

no code implementations · 1 Jun 2023 · Jiachen Li, Xinwei Shi, Feiyu Chen, Jonathan Stroud, Zhishuai Zhang, Tian Lan, Junhua Mao, Jeonhyung Kang, Khaled S. Refaat, Weilong Yang, Eugene Ie, CongCong Li

Accurate understanding and prediction of human behaviors are critical prerequisites for autonomous vehicles, especially in highly dynamic and interactive scenarios such as intersections in dense urban areas.

Action Recognition Autonomous Vehicles
