
Reinforcement Learning with Particle Swarm Optimization Policy (PSO-P) in Continuous State and Action Spaces

Author

Listed:
  • Daniel Hein

    (Technische Universität München, Munich, Germany)

  • Alexander Hentschel

    (Siemens AG, Munich, Germany)

  • Thomas A. Runkler

    (Siemens AG, Munich, Germany)

  • Steffen Udluft

    (Siemens AG, Munich, Germany)

Abstract

This article introduces a model-based reinforcement learning (RL) approach for continuous state and action spaces. While most RL methods try to find closed-form policies, the approach taken here employs numerical on-line optimization of control action sequences. First, a general method for reformulating RL problems as optimization tasks is provided. Subsequently, Particle Swarm Optimization (PSO) is applied to search for optimal solutions. This Particle Swarm Optimization Policy (PSO-P) is effective for high-dimensional state spaces and does not require a priori assumptions about adequate policy representations. Furthermore, by translating RL problems into optimization tasks, the rich collection of real-world-inspired RL benchmarks is made available for benchmarking numerical optimization techniques. The effectiveness of PSO-P is demonstrated on two standard benchmarks: mountain car and cart pole.
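
To make the mechanism concrete, the following is a minimal Python sketch of PSO-P as a receding-horizon controller. The transition model, reward function, horizon, and all PSO coefficients below are illustrative assumptions standing in for the learned system model and tuned settings the article describes; this is a sketch of the general scheme, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's learned system model: a simplified
# mountain-car-style dynamics and step reward. PSO-P assumes a model of the
# environment; everything below is an illustrative placeholder.
def transition(state, action):
    pos, vel = state
    vel = np.clip(vel + 0.001 * action - 0.0025 * np.cos(3 * pos), -0.07, 0.07)
    pos = np.clip(pos + vel, -1.2, 0.6)
    return np.array([pos, vel])

def reward(state, action):
    pos, _ = state
    return 0.0 if pos >= 0.6 else -1.0  # -1 per step until the goal is reached

# Step 1: reformulate the RL problem as an optimization task. The fitness of
# a candidate action sequence is the discounted return of a model rollout
# that starts in `state` and applies the sequence open-loop.
def fitness(state, actions, gamma=0.99):
    total, discount = 0.0, 1.0
    for a in actions:
        total += discount * reward(state, a)
        state = transition(state, a)
        discount *= gamma
    return total

# Step 2: search the continuous space of action sequences with a standard
# global-best PSO. Swarm size, horizon, and the w/c1/c2 coefficients are
# placeholder values, not the paper's settings.
def pso_policy(state, horizon=50, particles=30, iters=100,
               a_low=-1.0, a_high=1.0, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(a_low, a_high, (particles, horizon))  # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    p_best, p_val = x.copy(), np.array([fitness(state, xi) for xi in x])
    g_best, g_val = p_best[p_val.argmax()].copy(), p_val.max()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, a_low, a_high)
        vals = np.array([fitness(state, xi) for xi in x])
        better = vals > p_val
        p_best[better], p_val[better] = x[better], vals[better]
        if vals.max() > g_val:
            g_best, g_val = x[vals.argmax()].copy(), vals.max()
    return g_best[0]  # execute only the first action, then re-plan

# The "policy" is a fresh PSO run from the current state at every time step
# (receding-horizon control); no closed-form policy is ever represented.
state = np.array([-0.5, 0.0])
action = pso_policy(state)
```

Because only the first action of the optimized sequence is executed before re-planning from the next observed state, the scheme never commits to a fixed policy representation, which is what lets it work without a priori assumptions about policy structure.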

Suggested Citation

  • Daniel Hein & Alexander Hentschel & Thomas A. Runkler & Steffen Udluft, 2016. "Reinforcement Learning with Particle Swarm Optimization Policy (PSO-P) in Continuous State and Action Spaces," International Journal of Swarm Intelligence Research (IJSIR), IGI Global, vol. 7(3), pages 23-42, July.
  • Handle: RePEc:igg:jsir00:v:7:y:2016:i:3:p:23-42

    Download full text from publisher

    File URL: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJSIR.2016070102
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Stefano Bromuri, 2019. "Dynamic heuristic acceleration of linearly approximated SARSA(λ): using ant colony optimization to learn heuristics dynamically," Journal of Heuristics, Springer, vol. 25(6), pages 901-932, December.
    2. Hosseini, Ehsan & Aghadavoodi, Ehsan & Fernández Ramírez, Luis M., 2020. "Improving response of wind turbines by pitch angle controller based on gain-scheduled recurrent ANFIS type 2 with passive reinforcement learning," Renewable Energy, Elsevier, vol. 157(C), pages 897-910.
