
Evaluating Domain Randomization in Deep Reinforcement Learning Locomotion Tasks

Author

Listed:
  • Oladayo S. Ajani

    (School of Electronics Engineering, Kyungpook National University, Daegu 37224, Republic of Korea)

  • Sung-ho Hur

    (School of Electronics Engineering, Kyungpook National University, Daegu 37224, Republic of Korea)

  • Rammohan Mallipeddi

    (School of Electronics Engineering, Kyungpook National University, Daegu 37224, Republic of Korea)

Abstract

Domain randomization in the context of reinforcement learning (RL) involves training RL agents with randomized environmental properties or parameters to improve the generalization capabilities of the resulting agents. Although domain randomization has been favorably studied in the literature, it has been studied in terms of varying the operational characteristics of the associated systems or their physical dynamics rather than their environmental characteristics. This is counter-intuitive, as it is unrealistic to alter the mechanical dynamics of a system in operation. Furthermore, most works were based on cherry-picked environments within different classes of RL tasks. Therefore, in this work, we investigated domain randomization by varying only the properties or parameters of the environment rather than the mechanical dynamics of the featured systems. Furthermore, the analysis conducted was based on all six RL locomotion tasks. In terms of training the RL agents, we employed two proven RL algorithms (SAC and TD3) and evaluated the generalization capabilities of the resulting agents on several train–test scenarios that involve both in-distribution and out-of-distribution evaluations as well as scenarios applicable in the real world. The results demonstrate that, although domain randomization favors generalization, some tasks only require randomization from low-dimensional distributions while others require randomization from high-dimensional distributions. Hence, the question of what level of randomization is optimal for any given task becomes very important.
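
For illustration only (this is not the authors' code), the sketch below shows one common way to implement environment-level domain randomization of the kind described in the abstract: a Gymnasium wrapper that re-samples an environmental parameter, here gravity, at every episode reset of a MuJoCo locomotion task. The environment name, the choice of gravity as the randomized parameter, and the sampling range are assumptions made for this example.

    # Minimal sketch (assumed setup, not from the paper): randomize an environmental
    # parameter (gravity) at each episode reset, leaving the robot's dynamics untouched.
    import numpy as np
    import gymnasium as gym

    class GravityRandomization(gym.Wrapper):
        """Re-samples the vertical gravity component from a uniform range at every reset."""

        def __init__(self, env, gravity_range=(-11.0, -8.0)):
            super().__init__(env)
            self.gravity_range = gravity_range

        def reset(self, **kwargs):
            # MuJoCo exposes gravity through the model options of the unwrapped environment.
            low, high = self.gravity_range
            self.unwrapped.model.opt.gravity[2] = np.random.uniform(low, high)
            return self.env.reset(**kwargs)

    # Example usage: wrap a locomotion task, train SAC or TD3 on the randomized environment,
    # then evaluate on fixed in-distribution and out-of-distribution parameter settings.
    env = GravityRandomization(gym.make("HalfCheetah-v4"))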

Suggested Citation

  • Oladayo S. Ajani & Sung-ho Hur & Rammohan Mallipeddi, 2023. "Evaluating Domain Randomization in Deep Reinforcement Learning Locomotion Tasks," Mathematics, MDPI, vol. 11(23), pages 1-13, November.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:23:p:4744-:d:1286414

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/23/4744/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/23/4744/
    Download Restriction: no
