As reinforcement learning (RL) achieves more success on complex tasks, greater care is needed to ensure that RL research is reproducible and that the resulting algorithms can be compared easily and fairly, with minimal bias. RL results are, however, notoriously hard to reproduce: the algorithms have high intrinsic variance, the environments are stochastic, and numerous (potentially unreported) hyper-parameters affect the outcome. In this work we investigate the many issues that lead to irreproducible research and how to manage them. We further show how a rigorous, standardised evaluation approach eases the documentation, evaluation and fair comparison of different algorithms, emphasising the importance of choosing the right measurement metrics and conducting proper statistics on the results for unbiased reporting.
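The abstract's call for "proper statistics on the results" can be illustrated with a minimal sketch: aggregating per-seed returns from independent training runs and reporting a bootstrap confidence interval for the mean rather than a single best run. The numbers and the `bootstrap_ci` helper below are purely illustrative assumptions, not taken from the paper.

```python
import random
import statistics

# Hypothetical mean returns from 10 independent training runs (one per seed);
# these values are illustrative only.
seed_returns = [182.0, 175.5, 190.2, 168.9, 201.3,
                177.8, 185.1, 172.4, 194.7, 180.6]

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `data`."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(data, k=len(data)))
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

mean = statistics.fmean(seed_returns)
lo, hi = bootstrap_ci(seed_returns)
print(f"mean return: {mean:.1f}, 95% CI: [{lo:.1f}, {hi:.1f}]")
```

Reporting an interval over many seeds, instead of the single best seed, is one concrete way to reduce the bias the abstract warns about.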
Journal: Journal of Machine Learning Research
Publication status: Published - 2020
Event: 3rd Conference on Robot Learning (CoRL 2019), Osaka, Japan
Duration: 30 Oct 2019 → 1 Nov 2019
- reinforcement learning