PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison

Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters

Publication: Contribution to journal › Journal article › Research › peer review

Abstract

PAC-Bayes has recently re-emerged as an effective theory with which one can derive principled learning algorithms with tight performance guarantees. However, applications of PAC-Bayes to bandit problems are relatively rare, which is unfortunate: many decision-making problems in healthcare, finance and the natural sciences can be modelled as bandit problems, and in many of these applications principled algorithms with strong performance guarantees would be highly valuable. This survey provides an overview of PAC-Bayes bounds for bandit problems and an experimental comparison of these bounds. On the one hand, we found that PAC-Bayes bounds are a useful tool for designing offline bandit algorithms with performance guarantees. In our experiments, a PAC-Bayesian offline contextual bandit algorithm was able to learn randomised neural network policies with competitive expected reward and non-vacuous performance guarantees. On the other hand, the PAC-Bayesian online bandit algorithms that we tested had loose cumulative regret bounds. We conclude by discussing some topics for future work on PAC-Bayesian bandit algorithms.

Original language: English
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 12
Pages (from-to): 15308-15327
ISSN: 0162-8828
DOI
Status: Published - Dec. 2023

Bibliographic note

Publisher Copyright:
IEEE

