A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots

Nicolai Anton Lynnerup*, Laura Nolling Jensen, Rasmus Hasle, John Hallam

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Research › peer-review

Abstract

As reinforcement learning (RL) achieves more success in solving complex tasks, more care is needed to ensure that RL research is reproducible and that algorithms can be compared easily and fairly, with minimal bias. RL results are, however, notoriously hard to reproduce due to the algorithms' intrinsic variance, the environments' stochasticity, and numerous (potentially unreported) hyper-parameters. In this work we investigate the many issues that lead to irreproducible research and how to manage them. We further show how a rigorous and standardised evaluation approach eases the documentation, evaluation, and fair comparison of different algorithms, emphasising the importance of choosing the right measurement metrics and conducting proper statistics on the results for unbiased reporting.
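To illustrate the kind of statistical reporting the abstract advocates, the sketch below computes a percentile-bootstrap confidence interval for the mean return of each algorithm across independent training seeds, rather than reporting a single best run. This is a minimal illustration, not the paper's evaluation protocol; the per-seed returns are hypothetical.

```python
import random
import statistics

def bootstrap_ci(returns, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `returns`.

    Resamples the per-seed returns with replacement `n_boot` times and
    takes the alpha/2 and 1 - alpha/2 quantiles of the resampled means.
    """
    rng = random.Random(seed)  # fixed seed so the interval itself is reproducible
    means = sorted(
        statistics.fmean(rng.choices(returns, k=len(returns)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(returns), lo, hi

# Hypothetical mean episodic returns over 10 training seeds per algorithm.
algo_a = [210.0, 198.5, 225.1, 180.3, 240.7, 205.9, 192.4, 230.2, 215.8, 188.6]
algo_b = [195.2, 201.3, 188.7, 210.4, 199.8, 205.6, 190.1, 197.5, 203.2, 192.8]

for name, returns in [("A", algo_a), ("B", algo_b)]:
    mean, lo, hi = bootstrap_ci(returns)
    print(f"algo {name}: mean={mean:.1f}, 95% CI=[{lo:.1f}, {hi:.1f}]")
```

Reporting overlapping confidence intervals (rather than bare means) makes it explicit when the observed difference between two algorithms could be explained by seed-to-seed variance alone.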
Original language: English
Journal: Journal of Machine Learning Research
Volume: 100
Pages (from-to): 466-489
ISSN: 1532-4435
Publication status: Published - 2020
Event: 3rd Conference on Robot Learning (CoRL 2019), Osaka, Japan
Duration: 30 Oct 2019 - 1 Nov 2019

Conference

Conference: 3rd Conference on Robot Learning
Country/Territory: Japan
City: Osaka
Period: 30/10/2019 - 01/11/2019

Keywords

  • CoRL
  • robots
  • learning
  • reinforcement learning
  • reproducibility
  • statistics
