Search Results for author: Gaurav Sukhatme

Found 22 papers, 6 papers with code

Exploiting Generalization in Offline Reinforcement Learning via Unseen State Augmentations

no code implementations • 7 Aug 2023 Nirbhay Modhe, Qiaozi Gao, Ashwin Kalyan, Dhruv Batra, Govind Thattai, Gaurav Sukhatme

Offline reinforcement learning (RL) methods strike a balance between exploration and exploitation through conservative value estimation, penalizing the values of unseen states and actions.

Offline RL • Reinforcement Learning +1
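The conservative value estimation the abstract refers to can be illustrated with a small tabular sketch. This is illustrative only, not the paper's method: a hypothetical count-based penalty `beta / sqrt(1 + N)` pushes down the bootstrapped values of rarely or never seen next-state actions.

```python
import numpy as np

def conservative_q_update(Q, counts, s, a, r, s2, alpha=0.1, gamma=0.99, beta=1.0):
    """One tabular Q-learning step with a count-based conservatism penalty.

    Q: (S, A) value table; counts: (S, A) visit counts.
    The penalty beta / sqrt(1 + counts) is a simple stand-in for the
    conservative value estimation described in the abstract.
    """
    # Penalize next-state action values in proportion to how rarely
    # each (s2, a') pair has been visited.
    penalized = Q[s2] - beta / np.sqrt(1.0 + counts[s2])
    target = r + gamma * penalized.max()
    Q[s, a] += alpha * (target - Q[s, a])
    counts[s, a] += 1
    return Q, counts
```

With everything unvisited, the penalty dominates: a reward of 1 bootstrapped through a fully penalized next state yields a target of `1 + 0.99 * (0 - 1) = 0.01`, so the updated value stays deliberately pessimistic.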

Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning

no code implementations • 23 May 2023 Sumeet Batra, Bryon Tjanaka, Matthew C. Fontaine, Aleksei Petrenko, Stefanos Nikolaidis, Gaurav Sukhatme

However, recent advances in high-throughput, massively parallelized robotic simulators have opened the door for algorithms that can take advantage of such parallelism, and it is unclear how to scale existing off-policy QD-RL methods to these new data-rich regimes.

Reinforcement Learning (RL)

Learning Robot Manipulation from Cross-Morphology Demonstration

no code implementations • 7 Apr 2023 Gautam Salhotra, I-Chun Arthur Liu, Gaurav Sukhatme

Some Learning from Demonstrations (LfD) methods handle small mismatches in the action spaces of the teacher and student.

Imitation Learning • Robot Manipulation

Language-Informed Transfer Learning for Embodied Household Activities

no code implementations • 12 Jan 2023 Yuqian Jiang, Qiaozi Gao, Govind Thattai, Gaurav Sukhatme

For service robots to become general-purpose in everyday household environments, they need not only a large library of primitive skills, but also the ability to quickly learn novel tasks specified by users.

Semantic Similarity • Semantic Textual Similarity +1

Learning to Act with Affordance-Aware Multimodal Neural SLAM

1 code implementation • 24 Jan 2022 Zhiwei Jia, Kaixiang Lin, Yizhou Zhao, Qiaozi Gao, Govind Thattai, Gaurav Sukhatme

With the proposed Affordance-aware Multimodal Neural SLAM (AMSLAM) approach, we obtain more than 40% improvement over prior published work on the ALFRED benchmark and set a new state-of-the-art generalization performance at a success rate of 23.48% on the test unseen scenes.

Efficient Exploration • Test Unseen

Towards Exploiting Geometry and Time for Fast Off-Distribution Adaptation in Multi-Task Robot Learning

no code implementations • 24 Jun 2021 K. R. Zentner, Ryan Julian, Ujjwal Puri, Yulun Zhang, Gaurav Sukhatme

We explore possible methods for multi-task transfer learning that seek to exploit the shared physical structure of robotics tasks.

Transfer Learning

Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning

4 code implementations • ICML 2020 Aleksei Petrenko, Zhehui Huang, Tushar Kumar, Gaurav Sukhatme, Vladlen Koltun

In this work we aim to solve this problem by optimizing the efficiency and resource utilization of reinforcement learning algorithms instead of relying on distributed computation.

FPS Games • General Reinforcement Learning +3

Meta Learning via Learned Loss

no code implementations • 25 Sep 2019 Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav Sukhatme, Franziska Meier

We present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures.

Meta-Learning • Reinforcement Learning +1

Meta-Learning via Learned Loss

1 code implementation • 12 Jun 2019 Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav Sukhatme, Franziska Meier

This information shapes the learned loss function such that the environment does not need to provide this information during meta-test time.

Meta-Learning • Test

Accelerating Goal-Directed Reinforcement Learning by Model Characterization

no code implementations • 4 Jan 2019 Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu

Then, we leverage this approximate model, along with a notion of reachability based on Mean First Passage Times, to perform model-based reinforcement learning.

Model-based Reinforcement Learning • Q-Learning +2
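The Mean First Passage Times mentioned in the abstract can be computed for a fixed Markov chain by solving a small linear system: with goal state g removed, m = (I - Q)^-1 * 1, where Q is the transition matrix restricted to non-goal states. A minimal sketch (function name and setup are assumptions, not the paper's code):

```python
import numpy as np

def mean_first_passage_times(T, goal):
    """Expected number of steps to reach `goal` from each state.

    T: (S, S) row-stochastic transition matrix of a Markov chain.
    Solves (I - Q) m = 1 over the non-goal states; m[goal] = 0.
    """
    S = T.shape[0]
    others = [s for s in range(S) if s != goal]
    Q = T[np.ix_(others, others)]  # transitions among non-goal states
    m_others = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    m = np.zeros(S)
    m[others] = m_others
    return m
```

For a two-state chain that reaches the goal with probability 0.5 per step, this recovers the familiar geometric-distribution mean of 2 steps.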

Reachability and Differential based Heuristics for Solving Markov Decision Processes

no code implementations • 3 Jan 2019 Shoubhik Debnath, Lantao Liu, Gaurav Sukhatme

The solution convergence of Markov Decision Processes (MDPs) can be accelerated by prioritized sweeping of states ranked by their potential impacts to other states.
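The prioritized sweeping idea in the abstract (back up states in order of their potential impact on others) can be sketched in tabular form. This is a generic prioritized value-iteration sketch under assumed tensor shapes, not the paper's heuristic:

```python
import heapq
import numpy as np

def prioritized_sweeping_vi(P, R, gamma=0.95, theta=1e-8):
    """Value iteration that sweeps states ranked by Bellman error.

    P: (S, A, S) transition probabilities; R: (S, A) rewards.
    States whose values change propagate priority to their predecessors.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    # Predecessors of s2: states s with P[s, a, s2] > 0 for some a.
    preds = [set() for _ in range(S)]
    for s in range(S):
        for a in range(A):
            for s2 in np.nonzero(P[s, a])[0]:
                preds[s2].add(s)

    def bellman(s):
        return max(R[s, a] + gamma * P[s, a] @ V for a in range(A))

    # Max-heap (negated priorities) seeded with every state's Bellman error.
    pq = [(-abs(bellman(s) - V[s]), s) for s in range(S)]
    heapq.heapify(pq)
    in_queue = set(range(S))
    while pq:
        _, s = heapq.heappop(pq)
        in_queue.discard(s)
        new_v = bellman(s)
        delta = abs(new_v - V[s])
        V[s] = new_v
        if delta > theta:
            # A large change at s may change the values of its predecessors.
            for p in preds[s]:
                if p not in in_queue:
                    heapq.heappush(pq, (-delta, p))
                    in_queue.add(p)
    return V
```

Because high-error states are backed up first, value changes propagate backwards through the chain of predecessors instead of sweeping all states uniformly.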

Simulator Predictive Control: Using Learned Task Representations and MPC for Zero-Shot Generalization and Sequencing

1 code implementation • 4 Oct 2018 Zhanpeng He, Ryan Julian, Eric Heiden, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

We complete unseen tasks by choosing new sequences of skill latents to control the robot using MPC, where our MPC model is composed of the pre-trained skill policy executed in the simulation environment, run in parallel with the real robot.

Scaling simulation-to-real transfer by learning composable robot skills

1 code implementation • 26 Sep 2018 Ryan Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

In particular, we first use simulation to jointly learn a policy for a set of low-level skills, and a "skill embedding" parameterization which can be used to compose them.

Region Growing Curriculum Generation for Reinforcement Learning

no code implementations • 4 Jul 2018 Artem Molchanov, Karol Hausman, Stan Birchfield, Gaurav Sukhatme

In this work, we introduce a method based on region-growing that allows learning in an environment with any pair of initial and goal states.

Reinforcement Learning (RL)

Interactive Perception: Leveraging Action in Perception and Perception in Action

no code implementations • 13 Apr 2016 Jeannette Bohg, Karol Hausman, Bharath Sankaran, Oliver Brock, Danica Kragic, Stefan Schaal, Gaurav Sukhatme

Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment.


Decentralized Data Fusion and Active Sensing with Mobile Sensors for Modeling and Predicting Spatiotemporal Traffic Phenomena

no code implementations • 9 Aug 2014 Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan, Ali Oran, Patrick Jaillet, John Dolan, Gaurav Sukhatme

The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications such as detecting and forecasting congestion hotspots.
