Learning in agent based models
This paper examines the process by which agents learn to act in economic environments. Learning is particularly complicated in such situations since the environment is, at least in part, made up of other agents who are also learning. At best, one can hope to obtain analytical results for a rudimentary model. To make progress in understanding the dynamics of learning and coordination in general cases, one can simulate agent based models to see whether the results obtained in skeletal models carry over to the more general case. This approach can help us to understand which assumptions are crucial in determining whether learning converges and, if so, to what sort of state. Three examples are presented: one in which agents learn to form trading relationships, one in which agents misspecify the model of their environment, and a last one in which agents may learn to take actions which are systematically favourable (or unfavourable) for them. In each case, simulating models in which agents operate with simple rules in a complex environment allows us to examine the role of the type of learning process used by the agents, the extent to which they coordinate on a final outcome, and the nature of that outcome.
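The first example, agents learning to form trading relationships, can be illustrated with a minimal sketch of the kind of simple reinforcement rule the abstract alludes to. The payoff values, parameters, and update rule below are illustrative assumptions, not the paper's actual specification: each buyer keeps a weight per seller, chooses a seller with probability proportional to those weights, and reinforces the chosen seller by the payoff received.

```python
import random

def simulate(n_buyers=20, n_sellers=3, rounds=500, seed=0):
    """Illustrative sketch of reinforcement learning of trading partners.

    Each buyer keeps one weight per seller, picks a seller with
    probability proportional to the weights, and adds the payoff
    received to the chosen seller's weight. The positive feedback
    tends to make buyers 'loyal' to a single seller over time.
    """
    rng = random.Random(seed)
    # Hypothetical payoffs: sellers differ slightly in the surplus they offer.
    payoff = [1.0 + 0.1 * s for s in range(n_sellers)]
    # All buyers start with equal, uninformative weights on each seller.
    weights = [[1.0] * n_sellers for _ in range(n_buyers)]
    for _ in range(rounds):
        for b in range(n_buyers):
            # Probabilistic choice proportional to accumulated reinforcement.
            s = rng.choices(range(n_sellers), weights=weights[b])[0]
            weights[b][s] += payoff[s]
    return weights

# One simple summary of coordination: the share of each buyer's
# accumulated weight concentrated on a single seller ("loyalty").
loyalty = [max(w) / sum(w) for w in simulate()]
```

Even this skeletal rule typically produces the coordination phenomenon of interest: most buyers end up concentrating their custom on one seller, so the interesting questions become which assumptions (payoff differences, learning rule, population size) drive that convergence and which final configuration emerges.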
Date of creation: 09 Dec 2010
Note: View the original document on the HAL open archive server: http://halshs.archives-ouvertes.fr/halshs-00545169/en/
Provider web page: http://hal.archives-ouvertes.fr/
When requesting a correction, please mention this item's handle: RePEc:hal:wpaper:halshs-00545169.