Free-Riding And Delegation In Research Teams
This paper analyzes a two-player game of strategic experimentation with three-armed exponential bandits, in which players decide in continuous time how to allocate their endowment flow across the arms of their respective bandits. Players face replica bandits: one arm is safe in that it yields a known payoff, while the probability that each risky arm yields a positive payoff is initially unknown. It is common knowledge that the types of the two risky arms are perfectly negatively correlated. I show that the efficient policy is incentive-compatible if, and only if, the stakes are high enough. Moreover, learning is complete in any Markov perfect equilibrium if, and only if, the stakes exceed a certain threshold.
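To illustrate the learning dynamic the abstract describes, the sketch below simulates Bayesian belief updating for two exponential risky arms whose types are perfectly negatively correlated, so exactly one of them is "good". This is an illustrative reconstruction under standard exponential-bandit assumptions (Poisson breakthroughs at rate lambda times the allocation to the good arm), not code from the paper; all function names, parameter values, and the discretization are my own assumptions.

```python
import random


def belief_step(p, lam, k_a, k_b, dt):
    """One Euler step of Bayes' rule when NO breakthrough occurs in [t, t+dt).

    With perfectly negatively correlated arms, exactly one risky arm is good.
    p is the belief that arm A is good; k_a, k_b are the endowment shares
    allocated to arms A and B.  Silence on the more heavily sampled arm is
    bad news about it:  dp = -p (1 - p) lam (k_a - k_b) dt.
    """
    return p - p * (1.0 - p) * lam * (k_a - k_b) * dt


def simulate_beliefs(p0=0.5, lam=1.0, k_a=1.0, k_b=0.0,
                     dt=0.01, horizon=5.0, seed=0):
    """Simulate the belief path under a fixed allocation (hypothetical setup).

    The true good arm is drawn once; a breakthrough on it arrives at
    Poisson rate lam * allocation and fully reveals the state, sending
    the belief to 0 or 1.  Returns the terminal belief and stopping time.
    """
    rng = random.Random(seed)
    good_is_a = rng.random() < p0
    p, t = p0, 0.0
    while t < horizon:
        rate = lam * (k_a if good_is_a else k_b)
        if rng.random() < rate * dt:
            # Breakthrough: the state is revealed, belief jumps to 0 or 1.
            return (1.0 if good_is_a else 0.0), t
        p = belief_step(p, lam, k_a, k_b, dt)
        t += dt
    return p, t
```

The drift term captures why free-riding is tempting here: a player can let the rival sample one risky arm and still learn from the rival's silence, since no news about one arm is (negatively correlated) news about the other.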
Date of creation: 2009
Provider: Society for Economic Dynamics, Department of Economics, Stony Brook University, 10 Nicolls Road, Stony Brook, NY 11790, USA (http://www.EconomicDynamics.org/)
Handle: RePEc:red:sed009:253