Search Results for author: Vaibhav Mathur

Found 2 papers, 1 paper with code

Watch and Match: Supercharging Imitation with Regularized Optimal Transport

no code implementations • 30 Jun 2022 • Siddhant Haldar, Vaibhav Mathur, Denis Yarats, Lerrel Pinto

Our experiments on 20 visual control tasks across the DeepMind Control Suite, the OpenAI Robotics Suite, and the Meta-World Benchmark demonstrate an average of 7.8X faster imitation to reach 90% of expert performance compared to prior state-of-the-art methods.

Imitation Learning

Hydra: A Peer to Peer Distributed Training & Data Collection Framework

1 code implementation • 24 Nov 2018 • Vaibhav Mathur, Karanbir Chahal

Hydra couples a specialized distributed training framework, running on a network of low-powered devices, with a reward scheme that incentivizes users to provide high-quality data, unleashing the compute capability of this training framework.
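
The abstract snippet describes Hydra only at a high level. As a loose illustration of how peer-to-peer parameter averaging might be combined with a reward that favors high-quality data, here is a minimal sketch; the averaging scheme, the validation-based reward rule, and all names are assumptions for illustration, not Hydra's actual implementation.

```python
# Hypothetical sketch only: the kind of loop a peer-to-peer training
# framework with a data-quality reward might run. Not Hydra's API.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression task shared by all peers: y = X @ w_true + noise.
w_true = np.array([2.0, -1.0])

def make_peer_data(n, noise):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + rng.normal(scale=noise, size=n)
    return X, y

# Each peer holds local data of varying quality (noise level).
peers = [make_peer_data(50, noise) for noise in (0.1, 0.5, 2.0)]

# Held-out set used to score how much each peer's update helps (reward proxy).
X_val, y_val = make_peer_data(200, 0.1)

def val_loss(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

w = np.zeros(2)                  # shared model, replicated on every peer
rewards = [0.0] * len(peers)
lr = 0.05

for step in range(200):
    local_ws = []
    for i, (X, y) in enumerate(peers):
        # Local SGD step on the peer's own data.
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w_i = w - lr * grad
        local_ws.append(w_i)
        # Reward peers whose update reduces held-out loss (quality incentive).
        improvement = val_loss(w) - val_loss(w_i)
        rewards[i] += max(improvement, 0.0)
    # "All-reduce": average the peers' models to form the next shared model.
    w = np.mean(local_ws, axis=0)

print("learned w:", np.round(w, 3), "true w:", w_true)
print("accumulated rewards per peer:", np.round(rewards, 3))
```

Running this, peers with lower-noise data accumulate larger rewards, which is the incentive effect the abstract gestures at.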
