This paper revisits a recent study by Posen and Levinthal (Manag Sci 58:587–601, 2012) on the exploration/exploitation tradeoff in a multi-armed bandit problem where the reward probabilities undergo random shocks. We show that their analysis suffers from two shortcomings: it assumes that learning is based on stale evidence, and it overlooks the steady state. We let the learning rule endogenously discard stale evidence, and we perform long-run analyses. The comparative study demonstrates that some of their conclusions must be qualified.
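The setting described above can be illustrated with a minimal sketch of a "restless" bandit: each arm's reward probability is occasionally redrawn (a shock), and the agent discounts old observations via an exponential recency-weighted average so that stale evidence is gradually discarded. This is only an illustrative toy, not the authors' model; the parameter names (`shock_prob`, `alpha`, `tau`), the softmax choice rule, and all default values are assumptions made here for the sketch.

```python
import math
import random

def simulate(n_arms=10, periods=500, shock_prob=0.1, alpha=0.5, tau=0.05, seed=0):
    """Toy restless bandit (illustrative only).

    Each period, every arm's true reward probability is redrawn with
    probability shock_prob (an environmental shock).  The agent tracks
    beliefs q with an exponential recency-weighted average, so older
    (stale) observations carry geometrically decaying weight, and it
    chooses arms by softmax exploration with temperature tau.
    Returns the average per-period reward.
    """
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_arms)]   # true reward probabilities
    q = [0.5] * n_arms                          # beliefs, uninformative prior
    total = 0
    for _ in range(periods):
        # softmax choice over current beliefs
        weights = [math.exp(qi / tau) for qi in q]
        r = rng.random() * sum(weights)
        arm, acc = 0, weights[0]
        while acc < r:
            arm += 1
            acc += weights[arm]
        reward = 1 if rng.random() < p[arm] else 0
        total += reward
        # recency-weighted update: stale evidence decays at rate (1 - alpha)
        q[arm] += alpha * (reward - q[arm])
        # environmental turbulence: random shocks to reward probabilities
        p = [rng.random() if rng.random() < shock_prob else pi for pi in p]
    return total / periods
```

A higher `alpha` discards stale evidence faster, which helps under frequent shocks but adds noise in a stable environment; this tension is the substance of the tradeoff the paper analyzes.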
|Title||Artificial Economics and Self Organization: Agent-Based Approaches to Economics and Social Systems|
|Editors||Stephan Leitner, Friederike Wall|
|Status||Published - 2014|
|Name||Lecture Notes in Economics and Mathematical Systems|