Search Results for author: Vikash Kumar

Found 52 papers, 21 papers with code

A Game Theoretic Perspective on Model-Based Reinforcement Learning

no code implementations ICML 2020 Aravind Rajeswaran, Igor Mordatch, Vikash Kumar

We point out that a large class of MBRL algorithms can be viewed as a game between two players: (1) a policy player, which attempts to maximize rewards under the learned model; (2) a model player, which attempts to fit the real-world data collected by the policy player.

Continuous Control Model-based Reinforcement Learning +2
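
The two-player view described in this snippet can be illustrated with a toy sketch (assumed 1-D linear dynamics and a simple alternation scheme, not the paper's algorithm): the model player fits learned dynamics to data the policy player collected, while the policy player acts greedily under the current model.

```python
import numpy as np

# Toy illustration of the two-player view (not the paper's algorithm):
# true dynamics s' = a*s + b*u with reward -s'^2. The model player
# fits (a, b) by least squares on data the policy player collected;
# the policy player picks the action driving the predicted next state
# to zero under the current model, plus exploration noise.
rng = np.random.default_rng(0)
true_a, true_b = 0.9, 0.3

def real_step(s, u):
    return true_a * s + true_b * u  # real-world transition

S, U, S1 = [], [], []
model = np.array([0.0, 1.0])  # initial guess for (a, b)
s = 1.0
for _ in range(200):
    a_hat, b_hat = model
    # policy player: greedy action under the learned model (+ noise)
    u = (-a_hat * s / b_hat if abs(b_hat) > 1e-6 else 0.0) + 0.1 * rng.normal()
    s_next = real_step(s, u)
    S.append(s); U.append(u); S1.append(s_next)
    # model player: least-squares fit of the dynamics to collected data
    X = np.stack([S, U], axis=1)
    model, *_ = np.linalg.lstsq(X, np.array(S1), rcond=None)
    s = s_next if abs(s_next) > 1e-3 else rng.uniform(-1.0, 1.0)

print(np.round(model, 3))  # converges to the true (a, b)
```

Because the data here is exactly linear, the model player recovers the true dynamics once it has enough independent samples, after which the policy player's greedy action drives the state to zero.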

Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans

no code implementations 1 Dec 2023 Homanga Bharadhwaj, Abhinav Gupta, Vikash Kumar, Shubham Tulsiani

We pursue the goal of developing robots that can interact zero-shot with generic unseen objects via a diverse repertoire of manipulation skills and show how passive human videos can serve as a rich source of data for learning such generalist robots.

Robot Manipulation Translation

REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation

no code implementations 6 Sep 2023 Zheyuan Hu, Aaron Rovinsky, Jianlan Luo, Vikash Kumar, Abhishek Gupta, Sergey Levine

We demonstrate the benefits of reusing past data as replay buffer initialization for new tasks, for instance, the fast acquisition of intricate manipulation skills in the real world on a four-fingered robotic hand.

Imitation Learning Reinforcement Learning (RL)

MyoDex: A Generalizable Prior for Dexterous Manipulation

no code implementations 6 Sep 2023 Vittorio Caggiano, Sudeep Dasari, Vikash Kumar

While prior work has synthesized single musculoskeletal control behaviors, MyoDex is the first generalizable manipulation prior that catalyzes the learning of dexterous physiological control across a large variety of contact-rich behaviors.

Multi-Task Learning

RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking

no code implementations 5 Sep 2023 Homanga Bharadhwaj, Jay Vakil, Mohit Sharma, Abhinav Gupta, Shubham Tulsiani, Vikash Kumar

The grand aim of having a single robot that can manipulate arbitrary objects in diverse settings is at odds with the paucity of robotics datasets.

Chunking Robot Manipulation

Modified Lagrangian Formulation of Gear Tooth Crack Analysis using Combined Approach of Variable Mode Decomposition (VMD) and Time Synchronous Averaging (TSA)

no code implementations 29 Aug 2023 Subrata Mukherjee, Vikash Kumar, Somnath Sarangi

For the first time, an integrated approach combining variable mode decomposition (VMD) and time-synchronous averaging (TSA) is presented to analyze the dynamic behaviour of CEMG systems under different gear tooth cracks, which manifest as non-stationary, complex vibration signals contaminated with noise.
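
The TSA half of the pipeline this snippet describes is simple to sketch (the signal is assumed to be pre-resampled to a fixed number of samples per revolution; the VMD step is not reproduced here):

```python
import numpy as np

# Minimal time-synchronous averaging (TSA) sketch: fold a vibration
# signal into whole revolutions (assuming a fixed number of samples
# per revolution) and average across revolutions. Averaging suppresses
# noise and components not synchronous with the gear shaft, exposing
# crack-related periodic features.
def tsa(signal, samples_per_rev):
    signal = np.asarray(signal, dtype=float)
    n_revs = len(signal) // samples_per_rev
    folded = signal[: n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    return folded.mean(axis=0)
```

Feeding a noisy periodic vibration signal through `tsa` recovers the per-revolution waveform, with the residual noise shrinking roughly as one over the square root of the number of revolutions averaged.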

Integrated Approach of Gearbox Fault Diagnosis

no code implementations 27 Aug 2023 Vikash Kumar, Subrata Mukherjee, Somnath Sarangi

Gearbox fault diagnosis is one of the most important tasks in any industrial system.

SAR: Generalization of Physiological Agility and Dexterity via Synergistic Action Representation

no code implementations 7 Jul 2023 Cameron Berg, Vittorio Caggiano, Vikash Kumar

To the best of our knowledge, this investigation is the first of its kind to present an end-to-end pipeline for discovering synergies and using this representation to learn high-dimensional continuous control across a wide diversity of tasks.

Continuous Control
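
A PCA-style stand-in for a synergistic action representation can illustrate the idea in this snippet (illustrative only; SAR's actual construction is not claimed here):

```python
import numpy as np

# Illustrative synergy representation: learn the leading principal
# components ("synergies") of example high-dimensional actions, then
# command the system in the low-dimensional synergy space. PCA is a
# stand-in here, not SAR's exact construction.
def fit_synergies(actions, n_syn):
    mean = actions.mean(axis=0)
    _, _, vt = np.linalg.svd(actions - mean, full_matrices=False)
    return mean, vt[:n_syn]            # basis rows are orthonormal

def encode(action, mean, basis):
    return (action - mean) @ basis.T   # full action -> synergy coords

def decode(z, mean, basis):
    return mean + z @ basis            # synergy coords -> full action
```

When the example actions truly lie in a low-dimensional subspace, encoding into synergy coordinates and decoding back reconstructs them exactly, so a controller can act in the much smaller synergy space without losing expressiveness on that data.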

TorchRL: A data-driven decision-making library for PyTorch

2 code implementations 1 Jun 2023 Albert Bou, Matteo Bettini, Sebastian Dittert, Vikash Kumar, Shagun Sodhani, Xiaomeng Yang, Gianni de Fabritiis, Vincent Moens

PyTorch has ascended as a premier machine learning framework, yet it lacks a native and comprehensive library for decision and control tasks suitable for large development teams dealing with complex real-world data and environments.

Computational Efficiency Decision Making +1

LIV: Language-Image Representations and Rewards for Robotic Control

1 code implementation 1 Jun 2023 Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang, Osbert Bastani, Dinesh Jayaraman

We present Language-Image Value learning (LIV), a unified objective for vision-language representation and reward learning from action-free videos with text annotations.

Contrastive Learning Imitation Learning

Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware

no code implementations 23 Apr 2023 Tony Z. Zhao, Vikash Kumar, Sergey Levine, Chelsea Finn

Fine manipulation tasks, such as threading cable ties or slotting a battery, are notoriously difficult for robots because they require precision, careful coordination of contact forces, and closed-loop visual feedback.

Chunking Imitation Learning

Zero-Shot Robot Manipulation from Passive Human Videos

no code implementations 3 Feb 2023 Homanga Bharadhwaj, Abhinav Gupta, Shubham Tulsiani, Vikash Kumar

Can we learn robot manipulation for everyday tasks, only by watching videos of humans doing arbitrary tasks in different unstructured settings?

Robot Manipulation

Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance

no code implementations 19 Dec 2022 Kelvin Xu, Zheyuan Hu, Ria Doshi, Aaron Rovinsky, Vikash Kumar, Abhishek Gupta, Sergey Levine

In this paper, we describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks and enable robots with complex multi-fingered hands to learn to perform them through interaction.

reinforcement-learning Reinforcement Learning (RL)

Cross-Domain Transfer via Semantic Skill Imitation

no code implementations 14 Dec 2022 Karl Pertsch, Ruta Desai, Vikash Kumar, Franziska Meier, Joseph J. Lim, Dhruv Batra, Akshara Rai

We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g., human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g., a robotic manipulator in a simulated kitchen.

Reinforcement Learning (RL) Robot Manipulation

MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

1 code implementation 12 Dec 2022 Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran

We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which forms the three phases of our model-based RL framework.

Model-based Reinforcement Learning reinforcement-learning +1
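
Of the three ingredients this snippet lists, demonstration oversampling is the easiest to sketch (the 25% ratio and buffer layout below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Illustrative demo-oversampling sketch: each training batch draws a
# fixed fraction of transitions from the demonstration buffer and the
# rest from the online replay buffer, so scarce demonstrations stay
# over-represented as online data grows. demo_frac=0.25 is an assumed
# value, not the paper's setting.
def sample_batch(demo_buffer, online_buffer, batch_size, demo_frac=0.25, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n_demo = int(round(batch_size * demo_frac))
    demo_idx = rng.integers(0, len(demo_buffer), size=n_demo)
    online_idx = rng.integers(0, len(online_buffer), size=batch_size - n_demo)
    return ([demo_buffer[i] for i in demo_idx]
            + [online_buffer[j] for j in online_idx])
```

Sampling with replacement keeps the demo fraction constant regardless of how large the online buffer becomes.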

CACTI: A Framework for Scalable Multi-Task Multi-Scene Visual Imitation Learning

no code implementations 12 Dec 2022 Zhao Mandi, Homanga Bharadhwaj, Vincent Moens, Shuran Song, Aravind Rajeswaran, Vikash Kumar

On a real robot setup, CACTI enables efficient training of a single policy that can perform 10 manipulation tasks involving kitchen objects, and is robust to varying layouts of distractors.

Data Augmentation Image Generation +3

Visual Dexterity: In-Hand Reorientation of Novel and Complex Object Shapes

1 code implementation 21 Nov 2022 Tao Chen, Megha Tippur, Siyang Wu, Vikash Kumar, Edward Adelson, Pulkit Agrawal

The controller is trained using reinforcement learning in simulation and evaluated in the real world on new object shapes not used for training, including the most challenging scenario of reorienting objects held in the air by a downward-facing hand that must counteract gravity during reorientation.

Object

CoNMix for Source-free Single and Multi-target Domain Adaptation

1 code implementation 7 Nov 2022 Vikash Kumar, Rohit Lal, Himanshu Patil, Anirban Chakraborty

The main aim of this work is to address Single and Multi-target Domain Adaptation (SMTDA) in the source-free paradigm, which enforces the constraint that labeled source data is unavailable during target adaptation due to various privacy-related restrictions on data sharing.

Domain Adaptation Knowledge Distillation +2

All the Feels: A dexterous hand with large-area tactile sensing

no code implementations 27 Oct 2022 Raunaq Bhirangi, Abigail DeFranco, Jacob Adkins, Carmel Majidi, Abhinav Gupta, Tess Hellebrekers, Vikash Kumar

High cost and lack of reliability have precluded the widespread adoption of dexterous hands in robotics.

Real World Offline Reinforcement Learning with Realistic Data Source

no code implementations 12 Oct 2022 Gaoyue Zhou, Liyiming Ke, Siddhartha Srinivasa, Abhinav Gupta, Aravind Rajeswaran, Vikash Kumar

Offline reinforcement learning (ORL) holds great promise for robot learning due to its ability to learn from arbitrary pre-generated experience.

Imitation Learning reinforcement-learning +1

VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training

1 code implementation 30 Sep 2022 Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang

Given the inherent cost and scarcity of in-domain, task-specific robot data, learning from large, diverse, offline human videos has emerged as a promising path towards acquiring a generally useful visual representation for control; however, how these human videos can be used for general-purpose reward learning remains an open question.

Offline RL Open-Ended Question Answering +2

Learning Dexterous Manipulation from Exemplar Object Trajectories and Pre-Grasps

1 code implementation 22 Sep 2022 Sudeep Dasari, Abhinav Gupta, Vikash Kumar

This paper seeks to escape these constraints, by developing a Pre-Grasp informed Dexterous Manipulation (PGDM) framework that generates diverse dexterous manipulation behaviors, without any task-specific reasoning or hyper-parameter tuning.

Efficient Exploration

MyoSuite -- A contact-rich simulation suite for musculoskeletal motor control

2 code implementations 26 May 2022 Vittorio Caggiano, Huawei Wang, Guillaume Durandau, Massimo Sartori, Vikash Kumar

Current frameworks for musculoskeletal control do not support the physiological sophistication of musculoskeletal systems together with physical-world interaction capabilities.

Continuous Control

R3M: A Universal Visual Representation for Robot Manipulation

1 code implementation 23 Mar 2022 Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, Abhinav Gupta

We study how visual representations pre-trained on diverse human video data can enable data-efficient learning of downstream robotic manipulation tasks.

Contrastive Learning Robot Manipulation

Policy Architectures for Compositional Generalization in Control

no code implementations 10 Mar 2022 Allan Zhou, Vikash Kumar, Chelsea Finn, Aravind Rajeswaran

Many tasks in control, robotics, and planning can be specified using desired goal configurations for various entities in the environment.

Imitation Learning Robot Manipulation

Translating Robot Skills: Learning Unsupervised Skill Correspondences Across Robots

no code implementations 29 Sep 2021 Tanmay Shankar, Yixin Lin, Aravind Rajeswaran, Vikash Kumar, Stuart Anderson, Jean Oh

In this paper, we explore how we can endow robots with the ability to learn correspondences between their own skills, and those of morphologically different robots in different domains, in an entirely unsupervised manner.

Translation Unsupervised Machine Translation

Deep Neural Network Approach to Estimate Early Worst-Case Execution Time

no code implementations 28 Jul 2021 Vikash Kumar

However, obtaining these results in the early stages of system development is an essential prerequisite for dimensioning the system and configuring the hardware setup.

The Ingredients of Real World Robotic Reinforcement Learning

no code implementations ICLR 2020 Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.

reinforcement-learning Reinforcement Learning (RL)

Dynamics-Aware Unsupervised Skill Discovery

1 code implementation ICLR 2020 Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment.

Model-based Reinforcement Learning

The Ingredients of Real-World Robotic Reinforcement Learning

no code implementations 27 Apr 2020 Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

In this work, we discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.

reinforcement-learning Reinforcement Learning (RL)

Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning

2 code implementations 27 Apr 2020 Archit Sharma, Michael Ahn, Sergey Levine, Vikash Kumar, Karol Hausman, Shixiang Gu

Can we instead develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks?

Model Predictive Control reinforcement-learning +2

A Game Theoretic Framework for Model Based Reinforcement Learning

no code implementations 16 Apr 2020 Aravind Rajeswaran, Igor Mordatch, Vikash Kumar

Model-based reinforcement learning (MBRL) has recently gained immense interest due to its potential for sample efficiency and ability to incorporate off-policy data.

Model-based Reinforcement Learning reinforcement-learning +1

Benchmarking In-Hand Manipulation

no code implementations 9 Jan 2020 Silvia Cruciani, Balakumar Sundaralingam, Kaiyu Hang, Vikash Kumar, Tucker Hermans, Danica Kragic

The purpose of this benchmark is to evaluate the planning and control aspects of robotic in-hand manipulation systems.

Robotics

Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning

1 code implementation 25 Oct 2019 Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman

We present relay policy learning, a method for imitation and reinforcement learning that can solve multi-stage, long-horizon robotic tasks.

Imitation Learning reinforcement-learning +1

ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots

1 code implementation 25 Sep 2019 Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar

ROBEL introduces two robots, each aimed to accelerate reinforcement learning research in different task domains: D'Claw is a three-fingered hand robot that facilitates learning dexterous manipulation tasks, and D'Kitty is a four-legged robot that facilitates learning agile legged locomotion tasks.

Continuous Control reinforcement-learning +1

Deep Dynamics Models for Learning Dexterous Manipulation

2 code implementations 25 Sep 2019 Anusha Nagabandi, Kurt Konoglie, Sergey Levine, Vikash Kumar

Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills.

Model Predictive Control

Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real

no code implementations 13 Aug 2019 Ofir Nachum, Michael Ahn, Hugo Ponte, Shixiang Gu, Vikash Kumar

Our method hinges on the use of hierarchical sim2real -- a simulated environment is used to learn low-level goal-reaching skills, which are then used as the action space for a high-level RL controller, also trained in simulation.

Reinforcement Learning (RL)

Dynamics-Aware Unsupervised Discovery of Skills

3 code implementations 2 Jul 2019 Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment.

Model-based Reinforcement Learning

Learning Latent Plans from Play

1 code implementation 5 Mar 2019 Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet

Learning from play (LfP) offers three main advantages: 1) It is cheap.

Robotics

Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost

no code implementations 14 Oct 2018 Henry Zhu, Abhishek Gupta, Aravind Rajeswaran, Sergey Levine, Vikash Kumar

Dexterous multi-fingered robotic hands can perform a wide range of manipulation skills, making them an appealing component for general-purpose robotic manipulators.

reinforcement-learning Reinforcement Learning (RL)

Time Reversal as Self-Supervision

no code implementations 2 Oct 2018 Suraj Nair, Mohammad Babaeizadeh, Chelsea Finn, Sergey Levine, Vikash Kumar

We test our method on the domain of assembly, specifically the mating of tetris-style block pairs.

Model Predictive Control

Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines

no code implementations ICLR 2018 Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M. Bayen, Sham Kakade, Igor Mordatch, Pieter Abbeel

To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP.

Policy Gradient Methods reinforcement-learning +1

Divide-and-Conquer Reinforcement Learning

1 code implementation ICLR 2018 Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine

In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice.

Policy Gradient Methods reinforcement-learning +1
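
The slicing idea this snippet describes can be sketched as follows (1-D initial states, uniform bins, and per-slice stub policies are all illustrative assumptions; the actual partitioning and ensemble training may differ):

```python
import numpy as np

# Illustrative "slice" partition: split a 1-D initial-state interval
# into uniform bins and keep one policy per bin. This toy only shows
# the routing of an initial state to its per-slice policy; each stub
# policy just returns its slice index in place of a learned action.
def make_slices(lo, hi, n_slices):
    edges = np.linspace(lo, hi, n_slices + 1)
    policies = [lambda s, k=k: float(k) for k in range(n_slices)]
    return edges, policies

def act(s0, edges, policies):
    # find which slice the initial state falls into, clamped to range
    k = int(np.clip(np.searchsorted(edges, s0, side="right") - 1,
                    0, len(policies) - 1))
    return policies[k](s0)
```

Each slice's policy only has to handle a narrow band of initial states, which is the point of the partition; the ensemble is then easier to optimize than one policy over the full initial-state space.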

Domain Randomization and Generative Models for Robotic Grasping

no code implementations 17 Oct 2017 Joshua Tobin, Lukas Biewald, Rocky Duan, Marcin Andrychowicz, Ankur Handa, Vikash Kumar, Bob McGrew, Jonas Schneider, Peter Welinder, Wojciech Zaremba, Pieter Abbeel

In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis.

Object Robotic Grasping

Learning Dexterous Manipulation Policies from Experience and Imitation

no code implementations 15 Nov 2016 Vikash Kumar, Abhishek Gupta, Emanuel Todorov, Sergey Levine

We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state.
