Cycling in a stochastic learning algorithm for normal form games
In this paper we study a stochastic learning model for 2×2 normal form games that are played repeatedly. The main emphasis is on the emergence of cycles. We assume that the players have no information about the payoff matrix of their opponent, nor about their own. At every round each player observes only his or her own action and the payoff he or she receives. We prove that the learning algorithm, which is modeled by an urn scheme proposed by Arthur (1993), leads with positive probability to cycling of strategy profiles if the game has a mixed Nash equilibrium. If there are strict Nash equilibria, the learning process converges a.s. to the set of Nash equilibria.
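The reinforcement dynamic described in the abstract can be illustrated with a minimal simulation. The sketch below is a hedged reading of an Arthur-style urn scheme, not the paper's exact specification: each player keeps one urn "strength" per action, chooses an action with probability proportional to its strength, and reinforces the chosen action by the (positive) payoff received. The payoff matrices, initial strengths, and the `simulate` helper are illustrative assumptions.

```python
import random

def choose(strengths, rng):
    # Draw action 0 or 1 with probability proportional to urn strengths.
    total = strengths[0] + strengths[1]
    return 0 if rng.random() * total < strengths[0] else 1

def simulate(payoff_row, payoff_col, rounds=10000, seed=0):
    rng = random.Random(seed)
    s_row = [1.0, 1.0]  # illustrative initial urn composition, row player
    s_col = [1.0, 1.0]  # illustrative initial urn composition, column player
    history = []
    for _ in range(rounds):
        a = choose(s_row, rng)
        b = choose(s_col, rng)
        # Each player observes only his or her own action and payoff,
        # and adds that payoff to the chosen action's strength.
        s_row[a] += payoff_row[a][b]
        s_col[b] += payoff_col[a][b]
        history.append((a, b))
    return s_row, s_col, history

# Matching pennies with payoffs shifted to be positive (urn strengths must
# stay positive). Its unique Nash equilibrium is mixed, so this is the kind
# of game in which the paper shows cycling occurs with positive probability.
row = [[2.0, 1.0], [1.0, 2.0]]
col = [[1.0, 2.0], [2.0, 1.0]]
s_row, s_col, history = simulate(row, col)
```

In a game with a strict Nash equilibrium the same dynamic would instead concentrate play on equilibrium actions; here, with only a mixed equilibrium, the realized play keeps moving among all four strategy profiles.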
Volume 7 (1997), Issue 2
RePEc handle: RePEc:spr:joevec:v:7:y:1997:i:2:p:193-207