Stochastic Evolution with Slow Learning

Author Info

  • Beggs, A.

Abstract

This paper studies the extent to which diffusion approximations provide a reliable guide to equilibrium selection results in finite games. It is shown that they do for a class of finite games with weak learning, provided that limits are taken in a certain order. The paper also shows that making mutation rates small does not in general select a unique equilibrium, but making selection strong does.
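
The abstract gives no formal example, so the following is a minimal simulation sketch (not taken from the paper) of the general kind of model it alludes to: a finite population repeatedly playing a 2x2 coordination game, in which one randomly chosen agent per period revises by myopic best response (the learning step) and occasionally mutates to a random strategy. The payoffs, population size, and mutation rates below are illustrative assumptions, chosen only to show how rare mutations concentrate long-run play on one equilibrium (here the risk-dominant one), which is the kind of equilibrium-selection effect the abstract refers to.

    import random

    # Payoffs of a symmetric 2x2 coordination game with two pure equilibria.
    # With these (assumed) numbers, strategy 0 is risk-dominant and strategy 1
    # is payoff-dominant, the classic setting for equilibrium selection.
    PAYOFFS = [[3.0, 2.0],
               [0.0, 4.0]]

    def expected_payoff(strategy, others_playing_one, pop_size):
        """Expected payoff of `strategy` against a randomly drawn other agent."""
        p_one = others_playing_one / (pop_size - 1)
        return (1 - p_one) * PAYOFFS[strategy][0] + p_one * PAYOFFS[strategy][1]

    def simulate(pop_size=20, mutation_rate=0.01, periods=200_000, seed=0):
        """Return the fractions of time spent at the all-0 and all-1 states."""
        rng = random.Random(seed)
        n_one = pop_size // 2            # agents currently playing strategy 1
        time_all_zero = time_all_one = 0
        for _ in range(periods):
            # One randomly chosen agent revises its strategy each period.
            agent_plays_one = rng.random() < n_one / pop_size
            others_playing_one = n_one - (1 if agent_plays_one else 0)
            if rng.random() < mutation_rate:
                new_strategy = rng.randint(0, 1)   # mutation: random strategy
            else:
                # Learning step: myopic best response to the rest of the population.
                pay0 = expected_payoff(0, others_playing_one, pop_size)
                pay1 = expected_payoff(1, others_playing_one, pop_size)
                new_strategy = 1 if pay1 > pay0 else 0
            n_one = others_playing_one + new_strategy
            if n_one == 0:
                time_all_zero += 1
            elif n_one == pop_size:
                time_all_one += 1
        return time_all_zero / periods, time_all_one / periods

    if __name__ == "__main__":
        for mu in (0.05, 0.01, 0.001):
            at_zero, at_one = simulate(mutation_rate=mu)
            print(f"mutation rate {mu}: time at all-0 = {at_zero:.3f}, "
                  f"time at all-1 = {at_one:.3f}")

The paper's diffusion approximations concern continuous-state limits of such chains when learning is slow; the sketch above only illustrates the underlying finite Markov chain with mutations, not the approximation itself.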

Download Info

To our knowledge, this item is not available for download. To find out whether it is available, there are three options:
1. Check below under "Related research" whether another version of this item is available online.
2. Check the provider's web page to see whether it is in fact available.
3. Search for a similarly titled item that may be available.

Bibliographic Info

Paper provided by University of Oxford, Department of Economics in its series Economics Series Working Papers with number 9933.

Length: 29 pages
Date of creation: 2000
Handle: RePEc:oxf:wpaper:9933

Contact details of provider:
Postal: Manor Rd. Building, Oxford, OX1 3UQ
Web page: http://www.economics.ox.ac.uk/
More information through EDIRC

Related research

Keywords: games; risk; mutations.

References

No references listed on IDEAS

Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

Cited by:
  1. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
  2. Dai, Darong, 2010. "The Evolution of Cooperation in a Generalized Moran Process," MPRA Paper 40511, University Library of Munich, Germany.
  3. Sandholm, William H., 2003. "Evolution and equilibrium under inexact information," Games and Economic Behavior, Elsevier, vol. 44(2), pages 343-378, August.
  4. Beggs, A.W., 2007. "Large deviations and equilibrium selection in large populations," Journal of Economic Theory, Elsevier, vol. 132(1), pages 383-410, January.
  5. Sandholm, W.H., 1999. "Markov evolution with inexact information," Working papers 15, Wisconsin Madison - Social Systems.
  6. Dai, Darong, 2010. "The Evolution of Cooperation in a Generalized Moran Process [一般化Moran过程中的合作演化]," MPRA Paper 40261, University Library of Munich, Germany.
  7. Dai, Darong, 2012. "Learning Nash Equilibria," MPRA Paper 40040, University Library of Munich, Germany.

Lists

This item is not listed on Wikipedia, on a reading list or among the top items on IDEAS.

Statistics

Access and download statistics

Corrections

When requesting a correction, please mention this item's handle: RePEc:oxf:wpaper:9933. See general information about how to correct material in RePEc.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact Caroline Wise.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If references are entirely missing, you can add them using this form.

If the full reference list includes an item that is present in RePEc but the system did not link to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation.

Please note that corrections may take a couple of weeks to filter through the various RePEc services.