Author
Listed:
- KAGAN TUMER
(Oregon State University, 204 Rogers Hall, Corvallis, Oregon 97331, USA)
- NEWSHA KHANI
(Oregon State University, 204 Rogers Hall, Corvallis, Oregon 97331, USA)
Abstract
In large cooperative multiagent systems, coordinating the actions of the agents is critical to the overall system achieving its intended goal. Even when the agents aim to cooperate, ensuring that agent actions lead to good system-level behavior becomes increasingly difficult as systems grow larger. One of the fundamental difficulties in such multiagent systems is the slow learning process, where an agent not only needs to learn how to behave in a complex environment, but also needs to account for the actions of other learning agents. In this paper, we present a multiagent learning approach that significantly improves the learning speed in multiagent systems by allowing an agent to update its estimate of the rewards (e.g., the value function in reinforcement learning) for all its available actions, not just the action that was taken. This approach is based on an agent estimating the counterfactual reward it would have received had it taken a particular action. Our results show that rewards on such "actions not taken" are beneficial early in training, especially when only certain "key" actions are used. We then present results where agent teams are leveraged to estimate those rewards. Finally, we show that the improved learning speed is essential in dynamic environments, where fast learning is needed to track the underlying processes.
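The abstract's central idea (updating value estimates for actions that were not taken, using estimated counterfactual rewards) can be sketched in a few lines. The following Python fragment is an illustrative sketch only, not the authors' implementation: the `counterfactual_estimate` callback, the class name, and the epsilon-greedy stateless learner are all assumptions made for exposition; the paper's actual estimation scheme (e.g., using agent teams) is not reproduced here.

```python
import random

class ActionNotTakenLearner:
    """Stateless value learner that also updates estimates for actions not taken.

    Illustrative sketch of the idea described in the abstract, under assumed
    details (epsilon-greedy selection, tabular value estimates).
    """

    def __init__(self, n_actions, alpha=0.1, epsilon=0.1):
        self.values = [0.0] * n_actions   # value estimate per action
        self.alpha = alpha                # learning rate
        self.epsilon = epsilon            # exploration rate

    def select_action(self):
        # Epsilon-greedy action selection over the current value estimates.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, taken_action, reward, counterfactual_estimate):
        # Standard update for the action that was actually taken.
        self.values[taken_action] += self.alpha * (reward - self.values[taken_action])

        # Additional updates for the actions not taken, driven by an
        # estimate of the counterfactual reward each would have produced.
        # `counterfactual_estimate` is a hypothetical callback; it may return
        # None for actions whose reward cannot be estimated (e.g., when only
        # certain "key" actions are estimable, as the abstract suggests).
        for a in range(len(self.values)):
            if a == taken_action:
                continue
            est = counterfactual_estimate(a)
            if est is not None:
                self.values[a] += self.alpha * (est - self.values[a])
```

In this sketch, each environment step yields one real reward but many value updates, which is the mechanism the abstract credits for faster learning; how the counterfactual estimates are actually obtained is the subject of the paper itself.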
Suggested Citation
Kagan Tumer & Newsha Khani, 2009.
"Learning From Actions Not Taken In Multiagent Systems,"
Advances in Complex Systems (ACS), World Scientific Publishing Co. Pte. Ltd., vol. 12(04n05), pages 455-473.
Handle:
RePEc:wsi:acsxxx:v:12:y:2009:i:04n05:n:s0219525909002301
DOI: 10.1142/S0219525909002301
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wsi:acsxxx:v:12:y:2009:i:04n05:n:s0219525909002301. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Tai Tone Lim (email available below). General contact details of provider: http://www.worldscinet.com/acs/acs.shtml.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.