Trade-off on Sim2Real Learning: Real-world Learning Faster than Simulations

21 Jul 2020 · Jingyi Huang, Yizheng Zhang, Fabio Giardina, Andre Rosendo

Deep Reinforcement Learning (DRL) experiments are commonly performed in simulated environments due to the tremendous training sample demands of deep neural networks. In contrast, model-based Bayesian Learning allows a robot to learn good policies within a few trials in the real world. Although it takes fewer iterations, Bayesian methods pay a relatively higher computational cost per trial, and the advantage of such methods is strongly tied to dimensionality and noise. Here, we compare a Deep Bayesian Learning algorithm with a model-free DRL algorithm while analyzing results collected from both simulations and real-world experiments. Considering both Sim and Real learning, our experiments show that the sample-efficient Deep Bayesian RL outperforms DRL even when computation time (as opposed to the number of iterations) is taken into consideration. Additionally, the difference in computation time between Deep Bayesian RL performed in simulation and in real experiments points to a viable path across the reality gap. We also show that a mix of Sim and Real does not outperform a purely Real approach, suggesting that reality can provide the best prior knowledge for Bayesian Learning. Roboticists design and build robots every day, and our results show that higher learning efficiency in the real world will shorten the time between design and deployment by skipping simulations.
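
As an illustration of the trade-off described above, the sketch below (not the authors' code) contrasts the two regimes by cumulative wall-clock cost rather than by number of trials; all numbers and names (rollout_time_s, compute_time_per_trial_s) are hypothetical.

```python
# Minimal sketch, assuming per-trial wall-clock cost = rollout time + computation time.
# Figures are hypothetical and do not reproduce the paper's measurements.

def total_wall_clock(num_trials: int, rollout_time_s: float, compute_time_per_trial_s: float) -> float:
    """Cumulative wall-clock time for a learner over its whole training run."""
    return num_trials * (rollout_time_s + compute_time_per_trial_s)

# Model-free DRL: cheap per-trial computation, but many trials are needed.
drl_hours = total_wall_clock(10_000, rollout_time_s=5.0, compute_time_per_trial_s=0.05) / 3600

# Deep Bayesian RL: expensive model fitting per trial, but only a handful of trials.
bayes_hours = total_wall_clock(40, rollout_time_s=5.0, compute_time_per_trial_s=120.0) / 3600

print(f"DRL total:           {drl_hours:.1f} h")
print(f"Deep Bayesian total: {bayes_hours:.2f} h")
```

Under such assumptions, the higher per-trial computational cost of the Bayesian learner is quickly amortized by its far smaller trial count, which is the comparison axis the paper argues for.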
