A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots

9 Sep 2019  ·  Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam

As reinforcement learning (RL) achieves more success in solving complex tasks, more care is needed to ensure that RL research is reproducible and that the algorithms it produces can be compared easily and fairly, with minimal bias. RL results are, however, notoriously hard to reproduce due to the algorithms' intrinsic variance, the environments' stochasticity, and the numerous (potentially unreported) hyper-parameters. In this work we investigate the many issues that lead to irreproducible research and how to manage them. We further show how a rigorous and standardised evaluation approach can ease the documentation, evaluation and fair comparison of different algorithms, and we emphasise the importance of choosing the right measurement metrics and conducting proper statistics on the results for unbiased reporting.
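To illustrate the kind of statistical reporting the abstract refers to, the sketch below computes the mean final return of an algorithm over several random seeds together with a percentile-bootstrap confidence interval, instead of reporting a single best run. This is not code from the paper; the function name, seed count and return values are illustrative assumptions.

```python
import numpy as np

def bootstrap_ci(samples, n_resamples=10_000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for the mean of `samples`."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.asarray(samples, dtype=float)
    # Resample with replacement and record the mean of each resample.
    means = np.array([
        rng.choice(samples, size=samples.size, replace=True).mean()
        for _ in range(n_resamples)
    ])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return samples.mean(), lo, hi

# Hypothetical final evaluation returns of one algorithm over 10 seeds.
returns_per_seed = [212.4, 198.7, 230.1, 205.6, 221.9,
                    189.3, 240.5, 210.8, 200.2, 226.7]
mean, lo, hi = bootstrap_ci(returns_per_seed)
print(f"mean return {mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Reporting an interval across seeds, rather than a single run, is one concrete way to reduce the bias and variance issues the abstract highlights.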
