Time to absorption in discounted reinforcement models
Abstract
Reinforcement schemes are a class of non-Markovian stochastic processes; their non-Markovian nature allows them to model a form of memory of the past. One subclass of such models is that in which the past is exponentially discounted or forgotten. Models in this subclass often have the property of becoming trapped, with probability 1, in some degenerate state. While previous work has concentrated on such limit results, we concentrate here on a contrary effect: the time to become trapped may increase exponentially in 1/x as the discount rate, 1-x, approaches 1. As a result, the time to become trapped may easily exceed the lifetime of the simulation or of the physical data being modeled, in which case the quasi-stationary behavior is more germane. We apply our results to a model of social network formation based on ternary (three-person) interactions with uniform positive reinforcement.
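The trapping phenomenon described in the abstract can be illustrated with a minimal simulation. The sketch below is not the paper's ternary network model but a simpler two-action discounted reinforcement scheme under the same mechanism: all weights are multiplied by 1-x each step (so the discount rate is 1-x), and the chosen action's weight is then reinforced. The function name, initial weights, and trapping threshold are illustrative assumptions, not taken from the paper.

```python
import random

def time_to_trap(x, threshold=0.99, max_steps=10**6, seed=0):
    """Simulate a two-action reinforcement scheme with exponential
    discounting.  Each step, both weights are multiplied by (1 - x)
    (discount rate 1 - x), then the chosen action's weight gains a
    uniform positive reinforcement of 1.  Returns the first step at
    which one action's choice probability exceeds `threshold`
    ("trapping"), or max_steps if the cap is reached first."""
    rng = random.Random(seed)
    w = [1.0, 1.0]  # illustrative initial weights
    for t in range(1, max_steps + 1):
        p0 = w[0] / (w[0] + w[1])
        chosen = 0 if rng.random() < p0 else 1
        w[0] *= (1 - x)      # exponentially forget the past
        w[1] *= (1 - x)
        w[chosen] += 1.0     # uniform positive reinforcement
        if max(w) / (w[0] + w[1]) > threshold:
            return t
    return max_steps
```

With heavy discounting (x near 1) the past is forgotten almost immediately and the process typically traps within a handful of steps; as x shrinks toward 0 (discount rate 1-x approaching 1), the observed trapping time grows rapidly and can easily exceed any practical `max_steps`, which is exactly the regime where the quasi-stationary behavior matters more than the limit result.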
Bibliographic Info
Article provided by Elsevier in its journal Stochastic Processes and their Applications.
Volume 109, Issue 1 (January 2004).
Contact details of provider:
Web page: http://www.elsevier.com/wps/find/journaldescription.cws_home/505572/description#description