Strategic Experimentation with Poisson Bandits
We study a game of strategic experimentation with two-armed bandits where the risky arm distributes lump-sum payoffs according to a Poisson process. Its intensity is either high or low, and unknown to the players. We consider Markov perfect equilibria with beliefs as the state variable. As the belief process is piecewise deterministic, payoff functions solve differential-difference equations. There is no equilibrium where all players use cut-off strategies, and all equilibria exhibit an 'encouragement effect' relative to the single-agent optimum. We construct asymmetric equilibria in which players have symmetric continuation values at sufficiently optimistic beliefs yet take turns playing the risky arm before all experimentation stops. Owing to the encouragement effect, these equilibria Pareto dominate the unique symmetric one for sufficiently frequent turns. Rewarding the last experimenter with a higher continuation value increases the range of beliefs where players experiment, but may reduce average payoffs at more optimistic beliefs. Some equilibria exhibit an 'anticipation effect': as beliefs become more pessimistic, the continuation value of a single experimenter increases over some range because a lower belief means a shorter wait until another player takes over.
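The belief dynamics described above can be sketched in code. In this class of models, the posterior probability that the risky arm is good drifts down deterministically while no lump sum arrives and jumps up by Bayes' rule at each arrival, which is what makes the belief process piecewise deterministic. The following is a minimal simulation sketch under assumed parameter names (`lam_hi`, `lam_lo` for the high and low Poisson intensities); it is an illustration of the standard updating rules, not the paper's own computations.

```python
import random

def belief_jump(p, lam_hi, lam_lo):
    """Bayes update of the belief that the arm is good upon a lump-sum arrival."""
    return p * lam_hi / (p * lam_hi + (1 - p) * lam_lo)

def belief_drift(p, lam_hi, lam_lo, dt):
    """Deterministic downward drift over a short interval dt with no arrival:
    dp/dt = -(lam_hi - lam_lo) * p * (1 - p)."""
    return p - (lam_hi - lam_lo) * p * (1 - p) * dt

def simulate(p0, lam_hi, lam_lo, horizon, dt=0.01, seed=0):
    """Simulate a single experimenter's belief path on a discrete time grid,
    assuming the true intensity is high (so arrivals occur at rate lam_hi)."""
    rng = random.Random(seed)
    p, t, path = p0, 0.0, [p0]
    while t < horizon:
        if rng.random() < lam_hi * dt:  # a lump sum arrives this period
            p = belief_jump(p, lam_hi, lam_lo)
        else:                           # no arrival: bad news, belief drifts down
            p = belief_drift(p, lam_hi, lam_lo, dt)
        t += dt
        path.append(p)
    return path
```

Between arrivals the path decays smoothly toward pessimism; each arrival produces a discrete upward jump, so sample paths are sawtooth-shaped rather than diffusive, which is why the value functions solve differential-difference equations rather than ODEs.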
Date of creation: Apr 2009