Probabilistic Actor-Critic: Learning to Explore with PAC-Bayes Uncertainty

Research output: Working paper › Research


Abstract

We introduce Probabilistic Actor-Critic (PAC), a novel reinforcement learning algorithm with improved continuous-control performance thanks to its ability to balance the exploration-exploitation trade-off. PAC achieves this by seamlessly integrating stochastic policies and critics, creating a dynamic synergy between the estimation of critic uncertainty and actor training. The key contribution of PAC is that it explicitly models and infers epistemic uncertainty in the critic through Probably Approximately Correct-Bayesian (PAC-Bayes) analysis. Incorporating critic uncertainty enables PAC to adapt its exploration strategy as it learns, guiding the actor's decision-making process. PAC compares favorably against the fixed or pre-scheduled exploration schemes of prior art. The synergy between stochastic policies and critics, guided by PAC-Bayes analysis, represents a fundamental step towards more adaptive and effective exploration in deep reinforcement learning. We report empirical evaluations demonstrating PAC's enhanced stability and improved performance over the state of the art on diverse continuous control problems.
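
In general, PAC-Bayes analysis bounds the expected loss of a posterior distribution over hypotheses by its empirical loss plus a complexity term involving the KL divergence between posterior and prior; applied to a critic, such a bound quantifies how far the learned value estimates may be from the truth. As a rough illustration of the idea sketched above, and not the authors' actual method, the following PyTorch snippet uses the spread of a small critic ensemble as a stand-in for posterior uncertainty and adds it as a bonus in the actor's objective; EnsembleCritic, actor_objective, and the kappa coefficient are all hypothetical names and design choices.

    # Hypothetical sketch: an ensemble of critics stands in for a posterior over
    # Q-functions; its spread serves as an epistemic-uncertainty signal.
    import torch
    import torch.nn as nn

    class EnsembleCritic(nn.Module):
        """K independent Q-networks; their disagreement proxies epistemic uncertainty."""
        def __init__(self, obs_dim: int, act_dim: int, n_members: int = 5, hidden: int = 256):
            super().__init__()
            self.members = nn.ModuleList([
                nn.Sequential(
                    nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )
                for _ in range(n_members)
            ])

        def forward(self, obs: torch.Tensor, act: torch.Tensor):
            x = torch.cat([obs, act], dim=-1)
            qs = torch.stack([m(x) for m in self.members])  # (K, batch, 1)
            return qs.mean(dim=0), qs.std(dim=0)            # mean Q and its spread

    def actor_objective(critic: EnsembleCritic, obs: torch.Tensor,
                        act: torch.Tensor, kappa: float = 1.0) -> torch.Tensor:
        """Uncertainty-adjusted value: mean Q plus kappa times the critic's
        epistemic spread, steering the policy toward state-action regions the
        critic is still unsure about without ignoring estimated return."""
        q_mean, q_std = critic(obs, act)
        return (q_mean + kappa * q_std).mean()

In a complete agent, the actor would be updated by gradient ascent on this objective at actions sampled from the stochastic policy, and the exploration weight would shrink as critic uncertainty does; in PAC itself the uncertainty treatment is derived from a PAC-Bayes bound rather than a hand-tuned bonus.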
Original language: English
DOIs
Publication status: Published - 5 Feb 2024

Keywords

  • cs.LG
