Reinforcement Learning Rules in a Repeated Game
This paper examines the performance of simple reinforcement learning algorithms in a stationary environment and in a repeated game where the environment evolves endogenously based on the actions of other agents. Some reinforcement learning rules can be extremely sensitive to small changes in the initial conditions; consequently, events early in a simulation can affect the performance of the rule over a relatively long time horizon. However, when multiple adaptive agents interact, algorithms that performed poorly in a stationary environment often converge rapidly to stable aggregate behavior despite the slow and erratic behavior of individual learners. Conversely, algorithms that are robust in stationary environments can exhibit slow convergence in an evolving environment. Copyright 2001 by Kluwer Academic Publishers
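As a concrete illustration of the kind of rule discussed in the abstract, the sketch below implements a simple cumulative-payoff reinforcement rule (in the style of Roth-Erev learning) for two players in a repeated coordination game. The specific game, parameter values, and function names are illustrative assumptions, not taken from the paper:

```python
import random

def play_repeated_game(payoffs, rounds=2000, seed=0):
    """Two cumulative-payoff reinforcement learners in a symmetric 2x2 game.

    payoffs[a][b] is the payoff to a player choosing action a when the
    opponent chooses action b.  Each learner keeps a propensity per action,
    chooses an action with probability proportional to its propensity, and
    adds the payoff it received to the chosen action's propensity.
    (Illustrative sketch; not the paper's exact specification.)
    """
    rng = random.Random(seed)
    prop = [[1.0, 1.0], [1.0, 1.0]]  # initial propensities, one list per player
    actions = [0, 1]
    for _ in range(rounds):
        a = rng.choices(actions, weights=prop[0])[0]
        b = rng.choices(actions, weights=prop[1])[0]
        prop[0][a] += payoffs[a][b]  # reinforce player 1's chosen action
        prop[1][b] += payoffs[b][a]  # reinforce player 2's chosen action
    # Each player's final probability of playing action 0.
    return [p[0] / sum(p) for p in prop]

# A pure coordination game: payoff 1 for matching the opponent, 0 otherwise.
coordination = [[1.0, 0.0], [0.0, 1.0]]
probs = play_repeated_game(coordination)
```

Because propensities accumulate realized payoffs, a few lucky early rounds compound over time, which is one way such rules become sensitive to small changes in initial conditions, as the abstract notes.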
Volume (Year): 18 (2001)
Issue (Month): 1 (August)
Handle: RePEc:kap:compec:v:18:y:2001:i:1:p:89-110