This paper revisits a recent study by Posen and Levinthal (Manag Sci 58:587–601, 2012) on the exploration/exploitation tradeoff in a multi-armed bandit problem whose reward probabilities undergo random shocks. We show that their analysis suffers from two shortcomings: it assumes that learning is based on stale evidence, and it overlooks the steady state. We let the learning rule endogenously discard stale evidence, and we perform long-run analyses. The comparative study demonstrates that some of their conclusions must be qualified.
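The setting the abstract describes can be illustrated with a minimal sketch: a bandit whose arm payoff probabilities are occasionally redrawn (random shocks), an agent choosing arms by softmax over its beliefs, and a recency-weighted belief update so that old evidence decays instead of accumulating. All parameter names and values below (`shock_prob`, `phi`, `tau`, and the update rule itself) are illustrative assumptions, not the authors' exact specification.

```python
import math
import random

def simulate_bandit(n_arms=10, periods=500, shock_prob=0.02,
                    phi=0.5, tau=0.1, seed=0):
    """Illustrative restless-bandit sketch (not the paper's exact model).

    shock_prob: per-period chance each arm's reward probability is redrawn,
    phi:        weight on new evidence; beliefs discount stale observations
                geometrically rather than averaging over all history,
    tau:        softmax temperature governing exploration vs. exploitation.
    Returns the total reward accumulated over the horizon.
    """
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_arms)]   # true reward probabilities
    belief = [0.5] * n_arms                     # agent's payoff beliefs
    total = 0
    for _ in range(periods):
        # environmental turbulence: redraw shocked arms' probabilities
        for i in range(n_arms):
            if rng.random() < shock_prob:
                p[i] = rng.random()
        # softmax choice over current beliefs
        weights = [math.exp(b / tau) for b in belief]
        r, acc, choice = rng.random() * sum(weights), 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                choice = i
                break
        reward = 1 if rng.random() < p[choice] else 0
        total += reward
        # recency-weighted update: stale evidence is discounted, not hoarded
        belief[choice] += phi * (reward - belief[choice])
    return total
```

Because `phi` keeps beliefs tied to recent outcomes, the agent can track the shocked environment indefinitely, which is what makes a steady-state (long-run) analysis meaningful in this kind of model.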
Title of host publication: Artificial Economics and Self Organization: Agent-Based Approaches to Economics and Social Systems
Editors: Stephan Leitner, Friederike Wall
Publication status: Published - 2014
Series: Lecture Notes in Economics and Mathematical Systems