Search Results for author: Gaurav Sukhatme

Found 16 papers, 6 papers with code

Learning to Act with Affordance-Aware Multimodal Neural SLAM

1 code implementation • 24 Jan 2022 • Zhiwei Jia, Kaixiang Lin, Yizhou Zhao, Qiaozi Gao, Govind Thattai, Gaurav Sukhatme

Recent years have witnessed an emerging paradigm shift toward embodied artificial intelligence, in which an agent must learn to solve challenging tasks by interacting with its environment.

Efficient Exploration · Test unseen

Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion

1 code implementation • 10 Aug 2021 • Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, Gaurav Sukhatme

Language-guided robots performing home and office tasks must navigate in and interact with the world.

Towards Exploiting Geometry and Time for Fast Off-Distribution Adaptation in Multi-Task Robot Learning

no code implementations • 24 Jun 2021 • K. R. Zentner, Ryan Julian, Ujjwal Puri, Yulun Zhang, Gaurav Sukhatme

We explore possible methods for multi-task transfer learning which seek to exploit the shared physical structure of robotics tasks.

Transfer Learning

Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning

1 code implementation • ICML 2020 • Aleksei Petrenko, Zhehui Huang, Tushar Kumar, Gaurav Sukhatme, Vladlen Koltun

In this work we aim to solve this problem by optimizing the efficiency and resource utilization of reinforcement learning algorithms instead of relying on distributed computation.

FPS Games · General Reinforcement Learning · +2

Meta Learning via Learned Loss

no code implementations • 25 Sep 2019 • Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav Sukhatme, Franziska Meier

We present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures.

Meta-Learning · reinforcement-learning
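The abstract above describes a bilevel setup: an inner learner is trained with a parametric loss, and the loss parameters are meta-trained so that the resulting learner does well on the true task objective. A minimal NumPy sketch of that structure, under simplifying assumptions not from the paper: the model is linear, the learned loss has a single parameter `phi` that rescales a squared error, and the outer update uses finite differences rather than the paper's gradient-based method.

```python
import numpy as np

def inner_step(w, phi, X, y, alpha=0.01):
    """One gradient step on the learned loss L_phi(w) = phi * mean((Xw - y)^2).
    Here phi just rescales the effective step size -- the simplest
    parametric loss that still exposes the bilevel structure."""
    grad = phi * 2 * X.T @ (X @ w - y) / len(y)
    return w - alpha * grad

def task_loss(w, X, y):
    """The true task objective used only at meta-train time."""
    return float(np.mean((X @ w - y) ** 2))

def meta_train_loss(X, y, steps=300, eps=1e-3, meta_lr=2.0):
    """Outer loop: adjust the loss parameter phi (by central finite
    differences) so that one inner step from w = 0 minimizes the task loss."""
    phi = 1.0
    w0 = np.zeros(X.shape[1])
    for _ in range(steps):
        f = lambda p: task_loss(inner_step(w0, p, X, y), X, y)
        g = (f(phi + eps) - f(phi - eps)) / (2 * eps)
        phi -= meta_lr * g
    return phi
```

After meta-training, an inner step taken with the learned loss should reach a lower task loss than the same step taken with the unscaled loss, which is the sense in which the learned loss "shapes" the inner update.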

Meta-Learning via Learned Loss

1 code implementation • 12 Jun 2019 • Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav Sukhatme, Franziska Meier

This information shapes the learned loss function such that the environment does not need to provide this information during meta-test time.

Meta-Learning

Accelerating Goal-Directed Reinforcement Learning by Model Characterization

no code implementations • 4 Jan 2019 • Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu

Then, we leverage this approximate model along with a notion of reachability using Mean First Passage Times to perform Model-Based reinforcement learning.

Model-based Reinforcement Learning · Q-Learning · +1
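The Mean First Passage Times mentioned in the abstract have a closed form once a Markov chain model is available: delete the goal state's row and column from the transition matrix and solve a linear system. A small sketch of that reachability computation, assuming a known transition matrix (the paper first learns an approximate model; that step is omitted here).

```python
import numpy as np

def mean_first_passage_times(P, goal):
    """Expected number of steps to first reach `goal` from every state of a
    Markov chain with row-stochastic transition matrix P.

    Solves (I - Q) t = 1, where Q is P with the goal row and column removed,
    so t[s] = 1 + sum_{s'} Q[s, s'] t[s'] for each non-goal state s.
    """
    S = P.shape[0]
    others = [s for s in range(S) if s != goal]
    Q = P[np.ix_(others, others)]  # transitions among non-goal states only
    t_others = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    t = np.zeros(S)
    t[others] = t_others
    return t  # t[goal] == 0 by convention
```

For example, in a three-state chain where state 1 reaches the goal state 2 with probability 0.5 per step, the solve gives t[1] = 2 and t[0] = 3 steps on average.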

Reachability and Differential based Heuristics for Solving Markov Decision Processes

no code implementations • 3 Jan 2019 • Shoubhik Debnath, Lantao Liu, Gaurav Sukhatme

The solution convergence of Markov Decision Processes (MDPs) can be accelerated by prioritized sweeping of states ranked by their potential impacts to other states.
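Prioritized sweeping as described can be sketched with a priority queue keyed on Bellman residuals: states whose values would change most are backed up first, and each backup re-prioritizes the states that can transition into the updated one. A minimal asynchronous value-iteration variant for a tabular MDP with known dynamics; the paper's reachability and differential heuristics are more refined priority rules than the plain residual used here.

```python
import heapq
import numpy as np

def prioritized_value_iteration(P, R, gamma=0.9, theta=1e-6):
    """Value iteration that sweeps states in decreasing order of Bellman
    residual. P: (S, A, S) transition probabilities, R: (S, A) rewards."""
    S = P.shape[0]
    V = np.zeros(S)

    def residual(s):
        return abs((R[s] + gamma * P[s] @ V).max() - V[s])

    heap = [(-residual(s), s) for s in range(S)]
    heapq.heapify(heap)
    while heap:
        neg_p, s = heapq.heappop(heap)
        if -neg_p < theta:  # all remaining (possibly stale) priorities are small
            break
        V[s] = (R[s] + gamma * P[s] @ V).max()  # Bellman backup
        # Re-prioritize every state whose backup can see s (including s itself).
        for s2 in np.nonzero(P[:, :, s].max(axis=1) > 0)[0]:
            r = residual(s2)
            if r >= theta:
                heapq.heappush(heap, (-r, s2))
    return V
```

Because only predecessors of a changed state are re-queued, updates propagate backward from high-impact states instead of sweeping the whole state space each pass.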

Simulator Predictive Control: Using Learned Task Representations and MPC for Zero-Shot Generalization and Sequencing

1 code implementation • 4 Oct 2018 • Zhanpeng He, Ryan Julian, Eric Heiden, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

We complete unseen tasks by choosing new sequences of skill latents to control the robot using MPC, where our MPC model is composed of the pre-trained skill policy executed in the simulation environment, run in parallel with the real robot.

Scaling simulation-to-real transfer by learning composable robot skills

1 code implementation • 26 Sep 2018 • Ryan Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

In particular, we first use simulation to jointly learn a policy for a set of low-level skills, and a "skill embedding" parameterization which can be used to compose them.

Region Growing Curriculum Generation for Reinforcement Learning

no code implementations • 4 Jul 2018 • Artem Molchanov, Karol Hausman, Stan Birchfield, Gaurav Sukhatme

In this work, we introduce a method based on region-growing that allows learning in an environment with any pair of initial and goal states.

reinforcement-learning

Interactive Perception: Leveraging Action in Perception and Perception in Action

no code implementations • 13 Apr 2016 • Jeannette Bohg, Karol Hausman, Bharath Sankaran, Oliver Brock, Danica Kragic, Stefan Schaal, Gaurav Sukhatme

Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment.

Robotics

Decentralized Data Fusion and Active Sensing with Mobile Sensors for Modeling and Predicting Spatiotemporal Traffic Phenomena

no code implementations • 9 Aug 2014 • Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan, Ali Oran, Patrick Jaillet, John Dolan, Gaurav Sukhatme

The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications such as detecting and forecasting congestion hotspots.
