Search Results for author: Karmesh Yadav

Found 13 papers, 6 papers with code

Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control

1 code implementation • 9 May 2024 Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, Tim G. J. Rudner

This has led to the emergence of pre-trained vision-language models as a tool for transferring representations learned from internet-scale data to downstream tasks and new domains.

Representation Learning · Scene Understanding

OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav

no code implementations • 14 Mar 2023 Karmesh Yadav, Arjun Majumdar, Ram Ramrakhya, Naoki Yokoyama, Alexei Baevski, Zsolt Kira, Oleksandr Maksymets, Dhruv Batra

We present a single neural network architecture composed of task-agnostic components (ViTs, convolutions, and LSTMs) that achieves state-of-the-art results on both the ImageNav ("go to location in <this picture>") and ObjectNav ("find a chair") tasks without any task-specific modules such as object detection, segmentation, mapping, or planning.

Object Detection +3
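The OVRL-V2 abstract above describes a policy assembled only from task-agnostic parts: a visual encoder feeding a recurrent policy, with no detector, mapper, or planner. The toy numpy sketch below shows that general shape; the class names, sizes, and random-projection "encoder" are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyEncoder:
    """Stand-in for a ViT image encoder: flattens and projects the frame."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.01

    def __call__(self, img):
        return img.reshape(-1) @ self.W

class ToyLSTMPolicy:
    """Minimal LSTM cell followed by a linear action head."""
    def __init__(self, in_dim, hid_dim, n_actions):
        self.Wx = rng.standard_normal((in_dim, 4 * hid_dim)) * 0.01
        self.Wh = rng.standard_normal((hid_dim, 4 * hid_dim)) * 0.01
        self.b = np.zeros(4 * hid_dim)
        self.Wa = rng.standard_normal((hid_dim, n_actions)) * 0.01

    def step(self, x, h, c):
        z = x @ self.Wx + h @ self.Wh + self.b
        i, f, g, o = np.split(z, 4)
        sig = lambda v: 1.0 / (1.0 + np.exp(-v))
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
        return h, c

    def act(self, h):
        return int(np.argmax(h @ self.Wa))

# One rollout step: the goal image and the current frame share one encoder,
# matching the "task-agnostic components" idea (no task-specific modules).
enc = ToyEncoder(in_dim=32 * 32 * 3, out_dim=64)
policy = ToyLSTMPolicy(in_dim=128, hid_dim=64, n_actions=4)
h = c = np.zeros(64)
goal, frame = rng.random((32, 32, 3)), rng.random((32, 32, 3))
x = np.concatenate([enc(goal), enc(frame)])
h, c = policy.step(x, h, c)
action = policy.act(h)  # one of 4 discrete navigation actions
```

The recurrent state carries exploration history across steps, which is what lets a single generic network handle long-horizon navigation without an explicit map.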

Last-Mile Embodied Visual Navigation

1 code implementation • 21 Nov 2022 Justin Wasserman, Karmesh Yadav, Girish Chowdhary, Abhinav Gupta, Unnat Jain

Realistic long-horizon tasks like image-goal navigation involve exploratory and exploitative phases.

Visual Navigation

Offline Visual Representation Learning for Embodied Navigation

1 code implementation • 27 Apr 2022 Karmesh Yadav, Ram Ramrakhya, Arjun Majumdar, Vincent-Pierre Berges, Sachit Kuhar, Dhruv Batra, Alexei Baevski, Oleksandr Maksymets

In this paper, we show that an alternative 2-stage strategy is far more effective: (1) offline pretraining of visual representations with self-supervised learning (SSL) using large-scale pre-rendered images of indoor environments (Omnidata), and (2) online finetuning of visuomotor representations on specific tasks with image augmentations under long learning schedules.

Representation Learning · Self-Supervised Learning
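The two-stage recipe in the abstract above (offline self-supervised pretraining of a visual encoder, then online finetuning with augmentations on the task) can be sketched in miniature. Everything below is an illustrative assumption — a tied-weight linear autoencoder stands in for the SSL objective, additive noise stands in for image augmentation, and the function names and shapes are not from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssl_pretrain(images, dim=4, epochs=200, lr=0.01):
    """Stage 1 (offline): toy self-supervised stand-in — a tied-weight
    linear autoencoder trained to reconstruct pre-rendered frames."""
    n, d = images.shape
    W = 0.1 * rng.standard_normal((d, dim))
    for _ in range(epochs):
        err = images @ W @ W.T - images          # reconstruction error
        W -= lr * (err.T @ images @ W + images.T @ err @ W) / n
    return W

def finetune(W, images, labels, epochs=200, lr=0.1):
    """Stage 2 (online): keep the pretrained encoder and fit a task head
    on augmented views (additive noise stands in for augmentation)."""
    head = np.zeros(W.shape[1])
    for _ in range(epochs):
        aug = images + 0.01 * rng.standard_normal(images.shape)
        z = aug @ W                              # encode augmented views
        head -= lr * z.T @ (z @ head - labels) / len(labels)
    return head

frames = rng.standard_normal((64, 8))            # "pre-rendered images"
labels = frames @ rng.standard_normal(8)         # synthetic task signal
W = ssl_pretrain(frames)
head = finetune(W, frames, labels)
pred = (frames @ W) @ head
```

The point of the split is that stage 1 needs no task labels or environment interaction at all, so it can run once over a large offline image corpus before any task-specific training begins.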

Look-ahead Meta Learning for Continual Learning

2 code implementations NeurIPS 2020 Gunshi Gupta, Karmesh Yadav, Liam Paull

The continual learning problem involves training models with limited capacity to perform well on an unknown number of sequentially arriving tasks.

Continual Learning · Meta-Learning
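The look-ahead meta-learning idea behind this entry can be sketched with a first-order toy: take an inner SGD step on the new task using per-parameter learning rates, evaluate a meta-loss on new plus replayed data at the looked-ahead weights, and update both the weights and the learning rates. The linear-regression task, function names, and first-order approximation below are illustrative assumptions, not the La-MAML implementation.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the loss 0.5 * mean((Xw - y)^2)."""
    return X.T @ (X @ w - y) / len(y)

def la_maml_step(w, lrs, new_X, new_y, mem_X, mem_y, meta_lr=0.01):
    # Look-ahead: inner SGD step on the new task with per-parameter rates.
    g_inner = grad(w, new_X, new_y)
    w_look = w - lrs * g_inner
    # Meta-loss mixes the new task with samples replayed from memory,
    # so the rate update accounts for interference with old tasks.
    meta_X = np.vstack([new_X, mem_X])
    meta_y = np.concatenate([new_y, mem_y])
    g_meta = grad(w_look, meta_X, meta_y)
    # First-order meta-gradient wrt the rates: d(w_look)/d(lrs) = -g_inner,
    # so rates grow where inner and meta gradients agree, shrink otherwise.
    lrs = np.clip(lrs + meta_lr * g_meta * g_inner, 0.0, None)
    # Outer update of the weights at the freshly adapted rates.
    return w - lrs * g_meta, lrs

rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
X_new = rng.standard_normal((20, 5)); y_new = X_new @ w_true
X_mem = rng.standard_normal((10, 5)); y_mem = X_mem @ w_true
w, lrs = np.zeros(5), np.full(5, 0.1)   # learnable per-parameter rates
loss0 = 0.5 * np.mean((X_new @ w - y_new) ** 2)
for _ in range(30):
    w, lrs = la_maml_step(w, lrs, X_new, y_new, X_mem, y_mem)
loss1 = 0.5 * np.mean((X_new @ w - y_new) ** 2)
```

Clipping the rates at zero mirrors the intuition that a parameter whose update would hurt replayed tasks should simply be frozen rather than pushed the other way.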

La-MAML: Look-ahead Meta Learning for Continual Learning

3 code implementations ICML Workshop LifelongML 2020 Gunshi Gupta, Karmesh Yadav, Liam Paull

The continual learning problem involves training models with limited capacity to perform well on an unknown number of sequentially arriving tasks.

Continual Learning · Meta-Learning
