Experience-Weighted Attraction Learning in Games: A Unifying Approach
We describe a general model, 'experience-weighted attraction' (EWA) learning, which includes reinforcement learning and a class of weighted fictitious play belief models as special cases. In EWA, strategies have attractions which reflect prior predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ which weights the strength of hypothetical reinforcement of strategies which were not chosen according to the payoff they would have yielded. When δ = 0, choice reinforcement results. When δ = 1, levels of reinforcement of strategies are proportional to expected payoffs given beliefs based on past history. Another key feature is the growth rates of attractions. The EWA model controls the growth rates by two decay parameters, φ and ρ, which depreciate attractions and amount of experience separately. When φ = ρ, belief-based models result; when ρ = 0, choice reinforcement results. Using three data sets, the model's parameters were estimated on part of the data and used to predict the rest. Estimates of δ are generally around .50, φ around 1, and ρ varies from 0 to φ. Choice reinforcement models often outperform belief-based models in the calibration phase and underperform in out-of-sample validation. Both special cases are generally rejected in favor of EWA, though sometimes belief models do better. EWA is able to combine the best features of both approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, but reinforcing unchosen strategies substantially as belief-based models implicitly do.
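The updating rule summarized above can be sketched in Python. The sketch follows the EWA specification as described in the abstract: an experience weight N decayed by ρ, attractions decayed by φ, hypothetical reinforcement of unchosen strategies weighted by δ, and a logit choice rule. The function names and the response-sensitivity parameter λ in the logit rule are illustrative, not taken from this page:

```python
import math

def ewa_update(attractions, n_exp, chosen, payoffs, delta, phi, rho):
    """One EWA attraction update (illustrative sketch).

    attractions : current attraction of each strategy
    n_exp       : current experience weight N(t-1)
    chosen      : index of the strategy actually played
    payoffs     : payoff each strategy would have yielded this period
    """
    # Experience is depreciated by rho, then incremented by one period.
    n_new = rho * n_exp + 1.0
    new_attr = []
    for j, a in enumerate(attractions):
        # The chosen strategy is reinforced by its realized payoff;
        # unchosen strategies receive hypothetical reinforcement
        # weighted by delta.
        weight = 1.0 if j == chosen else delta
        new_attr.append((phi * n_exp * a + weight * payoffs[j]) / n_new)
    return new_attr, n_new

def logit_probs(attractions, lam):
    """Logit choice probabilities from attractions (lam = sensitivity)."""
    m = max(attractions)  # subtract max for numerical stability
    exps = [math.exp(lam * (a - m)) for a in attractions]
    z = sum(exps)
    return [e / z for e in exps]
```

Setting δ = 0, φ = 1, ρ = 0 in this sketch leaves unchosen attractions untouched and adds realized payoffs cumulatively, which is the choice-reinforcement special case; δ = 1 with φ = ρ recovers the weighted fictitious play case, as the abstract notes.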
Date of creation: Mar 1997
Contact details of provider:
Phone: 626 395-4065
Fax: 626 405-9841
Web page: http://www.hss.caltech.edu/ss
Order information: Working Paper Assistant, Division of the Humanities and Social Sciences, 228-77, Caltech, Pasadena CA 91125
When requesting a correction, please mention this item's handle: RePEc:clt:sswopa:1003.