Search Results for author: Dhruv Shah

Found 23 papers, 3 papers with code

SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation

no code implementations • 1 Mar 2024 • Noriaki Hirose, Dhruv Shah, Kyle Stachowicz, Ajay Sridhar, Sergey Levine

Specifically, SELFI stabilizes the online learning process by incorporating the same model-based learning objective from offline pre-training into the Q-values learned with online model-free reinforcement learning.
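
To make the idea concrete, here is a minimal sketch, assuming a simple actor-critic setup in PyTorch, of improving a policy against a combined value: the online model-free Q-value plus a placeholder standing in for the offline model-based objective. The network sizes and the `model_based_objective` function are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): improving the policy against the sum of
# the online model-free Q-value and an offline model-based objective, as the
# SELFI abstract describes. `model_based_objective` is a hypothetical placeholder.
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 2
q_net = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim), nn.Tanh())

def model_based_objective(obs, act):
    # Stand-in for the model-based objective from offline pre-training
    # (e.g., predicted collision-free progress); here just a smoothness penalty.
    return -(act ** 2).sum(dim=-1, keepdim=True)

obs = torch.randn(64, obs_dim)
act = policy(obs)
# The policy is improved against the *combined* value:
# online model-free Q plus the offline model-based objective.
combined_value = q_net(torch.cat([obs, act], dim=-1)) + model_based_objective(obs, act)
actor_loss = -combined_value.mean()
actor_loss.backward()
```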

Collision Avoidance, reinforcement-learning, +2

An Optimistic-Robust Approach for Dynamic Positioning of Omnichannel Inventories

no code implementations • 17 Oct 2023 • Pavithra Harsha, Shivaram Subramanian, Ali Koc, Mahesh Ramakrishna, Brian Quanz, Dhruv Shah, Chandra Narayanaswami

Using a real-world dataset from a large American omnichannel retail chain, a business value assessment during a peak period indicates a profitability gain of over 15% for BIO relative to RO and other baselines, while also preserving (practical) worst-case performance.

Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning

no code implementations • 16 Oct 2023 • Dhruv Shah, Michael Equi, Blazej Osinski, Fei Xia, Brian Ichter, Sergey Levine

Navigation in unfamiliar environments presents a major challenge for robots: while mapping and planning techniques can be used to build up a representation of the world, quickly discovering a path to a desired goal in unfamiliar settings with such methods often requires lengthy mapping and exploration.

Language Modelling, Navigate

NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration

no code implementations • 11 Oct 2023 • Ajay Sridhar, Dhruv Shah, Catherine Glossop, Sergey Levine

In this paper, we describe how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration, with the latter providing the ability to search novel environments, and the former providing the ability to reach a user-specified goal once it has been located.
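
As an illustration of the goal-masking idea (not the released model), the sketch below conditions one policy on an observation embedding and a goal embedding, and zeroes the goal embedding with a binary mask to switch between goal-agnostic exploration and goal-directed navigation; the diffusion action head is abstracted into a plain MLP, and all sizes are assumptions.

```python
# Minimal sketch of goal masking: one policy serves both modes, with the goal
# embedding zeroed out when exploring. Not the NoMaD architecture.
import torch
import torch.nn as nn

emb = 128
obs_encoder = nn.Sequential(nn.Linear(3 * 64 * 64, emb), nn.ReLU())
goal_encoder = nn.Sequential(nn.Linear(3 * 64 * 64, emb), nn.ReLU())
action_head = nn.Sequential(nn.Linear(2 * emb, 256), nn.ReLU(), nn.Linear(256, 2))

def act(obs_img, goal_img, goal_mask):
    """goal_mask = 1 -> ignore the goal (explore); goal_mask = 0 -> use it (navigate)."""
    z_obs = obs_encoder(obs_img.flatten(1))
    z_goal = goal_encoder(goal_img.flatten(1)) * (1.0 - goal_mask)
    return action_head(torch.cat([z_obs, z_goal], dim=-1))

obs_img = torch.rand(1, 3, 64, 64)
goal_img = torch.rand(1, 3, 64, 64)
explore_action = act(obs_img, goal_img, goal_mask=torch.ones(1, 1))
navigate_action = act(obs_img, goal_img, goal_mask=torch.zeros(1, 1))
```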

ViNT: A Foundation Model for Visual Navigation

no code implementations • 26 Jun 2023 • Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine

In this paper, we describe the Visual Navigation Transformer (ViNT), a foundation model that aims to bring the success of general-purpose pre-trained models to vision-based robotic navigation.

Visual Navigation

SACSoN: Scalable Autonomous Control for Social Navigation

no code implementations • 2 Jun 2023 • Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine

By minimizing this counterfactual perturbation, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
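
A minimal sketch of how such a counterfactual perturbation term could enter a reward, assuming access to the observed human trajectory and a predicted trajectory with the robot absent; the distance measure and weighting are illustrative, not the authors' formulation.

```python
# Minimal sketch (an assumption, not the authors' code) of a counterfactual
# perturbation penalty: compare the human trajectory observed with the robot
# present against a predicted trajectory with the robot absent.
import numpy as np

def counterfactual_perturbation(observed_human_traj, predicted_human_traj_no_robot):
    """Mean displacement between observed and counterfactual human trajectories."""
    return np.linalg.norm(observed_human_traj - predicted_human_traj_no_robot, axis=-1).mean()

observed = np.cumsum(np.random.randn(50, 2) * 0.1, axis=0)        # human path with robot present
counterfactual = np.cumsum(np.random.randn(50, 2) * 0.1, axis=0)  # predicted path without the robot

# Adding the (negated) penalty to the navigation reward pushes the policy toward
# behavior that does not alter the human's natural motion.
reward = -1.0 * counterfactual_perturbation(observed, counterfactual)
```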

Continual Learning, counterfactual, +3

Machine Vision Using Cellphone Camera: A Comparison of deep networks for classifying three challenging denominations of Indian Coins

no code implementations • 12 May 2023 • Keyur D. Joshi, Dhruv Shah, Varshil Shah, Nilay Gandhi, Sanket J. Shah, Sanket B. Shah

Therefore, it was hypothesized that a digital image of a coin resting on either of its sides could be classified into its correct denomination by training a deep neural network model.
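
A minimal sketch of the general setup the abstract describes, fine-tuning a pretrained CNN as a three-class denomination classifier; the ResNet-18 backbone, input size, and dummy batch are assumptions rather than the paper's configuration.

```python
# Minimal sketch, not the paper's code: fine-tune a pretrained CNN to classify
# coin images into three denominations.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)  # three challenging denominations

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data standing in for cellphone photos.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```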

FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing

no code implementations • 19 Apr 2023 • Kyle Stachowicz, Dhruv Shah, Arjun Bhorkar, Ilya Kostrikov, Sergey Levine

We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).

Reinforcement Learning (RL)

Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents

no code implementations • NeurIPS 2023 • Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, Brian Ichter

Recent progress in large language models (LLMs) has demonstrated the ability to learn and leverage Internet-scale knowledge through pre-training with autoregressive models.
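
As a hedged illustration of the idea suggested by the title (not the paper's implementation), the sketch below scores candidate continuations by the product of a language-model likelihood and a grounded feasibility score, so decoding favors actions that are both probable under the LLM and actually executable by the agent; both scoring functions are toy stand-ins.

```python
# Minimal sketch: combine LLM likelihood with a grounded feasibility score when
# choosing the next action text. All probabilities below are made-up stand-ins.
import numpy as np

candidates = ["pick up the sponge", "pick up the unicorn", "wipe the table"]

def lm_prob(cands):
    # Stand-in for the language model's likelihood of each continuation.
    return np.array([0.3, 0.5, 0.2])

def grounded_prob(cands):
    # Stand-in for affordance/feasibility scores from grounded models
    # (what the robot can actually see and do in its environment).
    return np.array([0.9, 0.01, 0.8])

scores = lm_prob(candidates) * grounded_prob(candidates)
best = candidates[int(np.argmax(scores))]
print(best)  # the feasible, likely action wins over the fluent-but-ungrounded one
```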

Language Modelling, Text Generation

Offline Reinforcement Learning for Visual Navigation

1 code implementation • 16 Dec 2022 • Dhruv Shah, Arjun Bhorkar, Hrish Leen, Ilya Kostrikov, Nick Rhinehart, Sergey Levine

Reinforcement learning can enable robots to navigate to distant goals while optimizing user-specified reward functions, including preferences for following lanes, staying on paved paths, or avoiding freshly mowed grass.
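
A minimal sketch of the kind of user-specified reward the abstract mentions, combining progress toward a distant goal with terrain preferences; the terrain labels and weights are illustrative assumptions, not the paper's reward.

```python
# Minimal sketch: a user-specified navigation reward mixing goal progress with
# terrain preferences (stay on pavement, avoid grass). Values are illustrative.
TERRAIN_PREFERENCE = {"paved": 0.0, "lane": 0.0, "grass": -1.0}

def reward(prev_dist_to_goal, dist_to_goal, terrain, w_progress=1.0, w_terrain=0.5):
    progress = prev_dist_to_goal - dist_to_goal  # positive when moving toward the goal
    return w_progress * progress + w_terrain * TERRAIN_PREFERENCE[terrain]

print(reward(10.0, 9.2, "paved"))  # rewarded for progress on pavement
print(reward(10.0, 9.2, "grass"))  # same progress, penalized for the terrain
```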

Navigate, Offline RL, +3

Learning Robotic Navigation from Experience: Principles, Methods, and Recent Results

no code implementations • 13 Dec 2022 • Sergey Levine, Dhruv Shah

Navigation is one of the most heavily studied problems in robotics, and is conventionally approached as a geometric mapping and planning problem.

GNM: A General Navigation Model to Drive Any Robot

1 code implementation • 7 Oct 2022 • Dhruv Shah, Ajay Sridhar, Arjun Bhorkar, Noriaki Hirose, Sergey Levine

Learning provides a powerful tool for vision-based navigation, but the capabilities of learning-based policies are constrained by limited training data.

LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action

1 code implementation • 10 Jul 2022 • Dhruv Shah, Blazej Osinski, Brian Ichter, Sergey Levine

Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing for good generalization to real-world settings.

Instruction Following, Language Modelling

ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints

no code implementations • 23 Feb 2022 • Dhruv Shah, Sergey Levine

In this work, we propose an approach that integrates learning and planning, and can utilize side information such as schematic roadmaps, satellite maps and GPS coordinates as a planning heuristic, without relying on them being accurate.
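
A minimal sketch of using geographic side information only as a heuristic, assuming a hypothetical learned reachability score over candidate subgoals plus a GPS straight-line term, so that an inaccurate overhead map biases the search without being trusted outright; this is an assumption-level illustration, not the ViKiNG planner.

```python
# Minimal sketch: rank candidate subgoals by a learned reachability score plus a
# geographic heuristic (negative straight-line distance to the goal). The
# heuristic weight and the reachability stand-in are illustrative assumptions.
import numpy as np

def gps_heuristic(candidate_xy, goal_xy):
    return -np.linalg.norm(np.asarray(candidate_xy) - np.asarray(goal_xy))

def learned_reachability(candidate_xy):
    # Stand-in for a learned model scoring whether the robot can actually
    # reach this subgoal from its current observation.
    return np.random.uniform(0.0, 1.0)

def choose_subgoal(candidates, goal_xy, w_heuristic=0.1):
    scores = [learned_reachability(c) + w_heuristic * gps_heuristic(c, goal_xy)
              for c in candidates]
    return candidates[int(np.argmax(scores))]

candidates = [(5.0, 2.0), (8.0, 9.0), (1.0, 1.0)]
print(choose_subgoal(candidates, goal_xy=(10.0, 10.0)))
```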

3D Reconstruction, General Knowledge, +1

Rapid Exploration for Open-World Navigation with Latent Goal Models

no code implementations • 12 Apr 2021 • Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.

Autonomous Navigation

ViNG: Learning Open-World Navigation with Visual Goals

no code implementations • 17 Dec 2020 • Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform.

Navigate, reinforcement-learning, +1

Effect Of Weather Conditions On FSO Link

no code implementations • 17 Sep 2020 • Saumya Borwankar, Dhruv Shah

Free Space Optics (FSO) is a developing technology for line-of-sight communication that uses light propagation in free space and offers advantages such as high bandwidth, high data rates, easy installation, license-free operation, and secure communication.
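
For context, a minimal worked example of standard free-space link-budget arithmetic, showing how a weather-dependent attenuation coefficient in dB/km reduces received power over a fixed link length; the coefficient and loss values are illustrative placeholders, not measurements from the paper.

```python
# Minimal sketch: received power on an FSO link under different weather-dependent
# attenuation coefficients. All numbers are illustrative assumptions.
def received_power_dbm(tx_power_dbm, link_km, atten_db_per_km, other_losses_db=6.0):
    return tx_power_dbm - atten_db_per_km * link_km - other_losses_db

WEATHER_ATTENUATION = {"clear": 0.5, "haze": 4.0, "moderate_fog": 40.0}  # dB/km, illustrative

for weather, alpha in WEATHER_ATTENUATION.items():
    rx = received_power_dbm(tx_power_dbm=10.0, link_km=1.0, atten_db_per_km=alpha)
    print(weather, round(rx, 1), "dBm")
```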

The Ingredients of Real World Robotic Reinforcement Learning

no code implementations • ICLR 2020 • Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.

reinforcement-learning, Reinforcement Learning (RL)

The Ingredients of Real-World Robotic Reinforcement Learning

no code implementations • 27 Apr 2020 • Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

In this work, we discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.

reinforcement-learning, Reinforcement Learning (RL)

Machine learning based co-creative design framework

no code implementations • 23 Jan 2020 • Brian Quanz, Wei Sun, Ajay Deshpande, Dhruv Shah, Jae-Eun Park

We propose a flexible, co-creative framework that brings together multiple machine learning techniques to assist human users in efficiently producing effective creative designs.

BIG-bench Machine Learning

Toward A Neuro-inspired Creative Decoder

no code implementations • 6 Feb 2019 • Payel Das, Brian Quanz, Pin-Yu Chen, Jae-wook Ahn, Dhruv Shah

Creativity, a process that generates novel and meaningful ideas, involves increased association between task-positive (control) and task-negative (default) networks in the human brain.
