Search Results for author: Jacob Krantz

Found 12 papers, 5 papers with code

Instance-Specific Image Goal Navigation: Training Embodied Agents to Find Object Instances

no code implementations · 29 Nov 2022 · Jacob Krantz, Stefan Lee, Jitendra Malik, Dhruv Batra, Devendra Singh Chaplot

We consider the problem of embodied visual navigation given an image-goal (ImageNav) where an agent is initialized in an unfamiliar environment and tasked with navigating to a location 'described' by an image.

Visual Navigation

Iterative Vision-and-Language Navigation

no code implementations · CVPR 2023 · Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason Corso, Peter Anderson, Stefan Lee, Jesse Thomason

We present Iterative Vision-and-Language Navigation (IVLN), a paradigm for evaluating language-guided agents navigating in a persistent environment over time.

Instruction Following · Vision and Language Navigation

Sim-2-Sim Transfer for Vision-and-Language Navigation in Continuous Environments

no code implementations · 20 Apr 2022 · Jacob Krantz, Stefan Lee

Recent work in Vision-and-Language Navigation (VLN) has presented two environmental paradigms with differing realism -- the standard VLN setting built on topological environments where navigation is abstracted away, and the VLN-CE setting where agents must navigate continuous 3D environments using low-level actions.

Navigate · Vision and Language Navigation

Waypoint Models for Instruction-guided Navigation in Continuous Environments

1 code implementation · ICCV 2021 · Jacob Krantz, Aaron Gokaslan, Dhruv Batra, Stefan Lee, Oleksandr Maksymets

Little inquiry has explicitly addressed the role of action spaces in language-guided visual navigation -- either in terms of their effect on navigation success or the efficiency with which a robotic agent could execute the resulting trajectory.

Instruction Following · Visual Navigation

Where Are You? Localization from Embodied Dialog

2 code implementations · EMNLP 2020 · Meera Hahn, Jacob Krantz, Dhruv Batra, Devi Parikh, James M. Rehg, Stefan Lee, Peter Anderson

In this paper, we focus on the LED task -- providing a strong baseline model with detailed ablations characterizing both dataset biases and the importance of various modeling choices.

Navigate · Visual Dialog

Beyond the Nav-Graph: Vision-and-Language Navigation in Continuous Environments – Extended Abstract

no code implementations · ICML Workshop LaReL 2020 · Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, Stefan Lee

We develop a language-guided navigation task set in a continuous 3D environment where agents must execute low-level actions to follow natural language navigation directions.

Vision and Language Navigation

Beyond the Nav-Graph: Vision-and-Language Navigation in Continuous Environments

3 code implementations · ECCV 2020 · Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, Stefan Lee

We develop a language-guided navigation task set in a continuous 3D environment where agents must execute low-level actions to follow natural language navigation directions.

Vision and Language Navigation

Language-Agnostic Syllabification with Neural Sequence Labeling

1 code implementation · 29 Sep 2019 · Jacob Krantz, Maxwell Dulin, Paul De Palma

The concept of the syllable is cross-linguistic, though formal definitions are rarely agreed upon, even within a language.

Chunking · Named Entity Recognition · +8

Abstractive Summarization Using Attentive Neural Techniques

2 code implementations · 20 Oct 2018 · Jacob Krantz, Jugal Kalita

However, we show that these metrics are limited in their ability to effectively score abstractive summaries, and propose a new approach based on the intuition that an abstractive model requires an abstractive evaluation.

Abstractive Text Summarization · Machine Translation · +2

Syllabification by Phone Categorization

no code implementations · 15 Jul 2018 · Jacob Krantz, Maxwell Dulin, Paul De Palma, Mark VanDam

Syllables play an important role in speech synthesis, speech recognition, and spoken document retrieval.

Retrieval · Speech Recognition · +2
