
Design of Robust Adaptive Nonlinear Backstepping Controller Enhanced by Deep Deterministic Policy Gradient Algorithm for Efficient Power Converter Regulation

Author

Listed:
  • Seyyed Morteza Ghamari

    (School of Engineering, Edith Cowan University, Joondalup 6027, Australia)

  • Asma Aziz

    (School of Engineering, Edith Cowan University, Joondalup 6027, Australia)

  • Mehrdad Ghahramani

    (School of Engineering, Edith Cowan University, Joondalup 6027, Australia)

Abstract

Power converters play an important role in integrating renewable energy sources into power systems. Among the various converter designs, Buck and Boost converters are popular because they use few components while delivering cost savings and high efficiency. Boost converters, however, are non-minimum-phase systems, which imposes harder constraints on robust controller design. Developing an efficient controller for these topologies is difficult, since they exhibit nonlinearity and distortion in high-frequency modes. A Lyapunov-based Adaptive Backstepping Controller (ABSC) is used to regulate the outputs of these structures. This approach extends classical backstepping with a Lyapunov stability function, yielding increased stability and resistance to fluctuations under real-world conditions. In real-time operation, however, wide-ranging disturbances such as supply-voltage changes, parameter variations, and noise can degrade the performance of this strategy. To increase the controller's flexibility under harsher operating conditions, the most appropriate initial gains must be established. To address these concerns, the ABSC's performance is optimized using a Reinforcement Learning (RL) adaptive technique. RL offers several advantages, including lower susceptibility to error, more trustworthy results obtained from data gathered in the environment, accurate model behavior within a given context, and better frequency matching in real-time applications. Random exploration, on the other hand, can have disastrous effects and produce unexpected results in real-world situations; we therefore choose the Deep Deterministic Policy Gradient (DDPG) approach, which uses a deterministic action function rather than a stochastic one. Its key advantages include effective handling of continuous action spaces, improved sample efficiency through off-policy learning, and faster convergence via an actor-critic architecture that balances value estimation and policy optimization. Furthermore, the technique uses the Grey Wolf Optimization (GWO) algorithm to improve the initial set of gains, resulting in more reliable outcomes and quicker dynamics. GWO is notable for its disciplined, nature-inspired search, which leads to faster decision-making and greater accuracy than comparable optimization methods. The overall method treats the system as a black box, requiring no exact mathematical model and thus reducing complexity and computational burden. The effectiveness of the strategy is tested in both simulation and experimental scenarios using a Hardware-in-the-Loop (HIL) framework, with considerable results and decreased error sensitivity.
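
The abstract does not reproduce the paper's control law, but the backstepping design it refers to follows a standard two-step Lyapunov construction. Below is a minimal sketch for an averaged buck-converter model; the model, the symbols (output voltage v_o, inductor current i_L, duty cycle d), and the gains k_1, k_2 are generic textbook assumptions, not the paper's notation, and the paper's adaptive parameter-update laws and the non-minimum-phase Boost case are omitted.

```latex
\begin{align*}
% Assumed averaged buck-converter model (not taken from the paper):
&\dot v_o = \tfrac{1}{C}\Big(i_L - \tfrac{v_o}{R}\Big), \qquad
 \dot i_L = \tfrac{1}{L}\big(d\,V_{in} - v_o\big) \\[2pt]
% Step 1: voltage-tracking error and first Lyapunov function
&e_1 = v_o - v_{ref}, \qquad V_1 = \tfrac{1}{2}e_1^2, \qquad
 \dot V_1 = e_1\Big(\tfrac{1}{C}\big(i_L - \tfrac{v_o}{R}\big) - \dot v_{ref}\Big) \\[2pt]
% Virtual control: the current reference that gives \dot V_1 = -k_1 e_1^2
&i_L^{*} = \tfrac{v_o}{R} + C\big(\dot v_{ref} - k_1 e_1\big) \\[2pt]
% Step 2: current error and augmented Lyapunov function
&e_2 = i_L - i_L^{*}, \qquad V_2 = V_1 + \tfrac{1}{2}e_2^2, \qquad
 \dot V_2 = -k_1 e_1^2 + \tfrac{e_1 e_2}{C}
   + e_2\Big(\tfrac{d\,V_{in} - v_o}{L} - \dot i_L^{*}\Big) \\[2pt]
% Actual duty-cycle law: cancels the cross term and enforces \dot V_2 \le 0
&d = \frac{1}{V_{in}}\Big[\,v_o + L\Big(\dot i_L^{*} - \tfrac{e_1}{C} - k_2 e_2\Big)\Big]
 \;\Longrightarrow\; \dot V_2 = -k_1 e_1^2 - k_2 e_2^2 \le 0
\end{align*}
```

The positive gains k_1 and k_2 are precisely the "initial gains" that the GWO stage would select and that the DDPG agent would then adapt online.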
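
The DDPG ingredients the abstract lists (a deterministic actor, off-policy critic updates with target networks, and an actor-critic split) can be seen compactly in the sketch below. This is a single-sample toy with linear function approximators and an assumed linearized converter-error environment standing in for the real plant; the paper's deep networks and replay buffer are omitted, and every dimension, rate, and dynamic coefficient here is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-dim state (e.g. current/voltage tracking errors) and
# scalar action (duty-cycle correction); values are illustrative.
S_DIM, A_DIM = 2, 1
GAMMA, TAU = 0.99, 0.01      # discount and Polyak soft-update rate (assumed)
LR_A, LR_C = 1e-3, 1e-2      # actor / critic learning rates (assumed)

# Linear actor mu(s) = Wa @ s and linear-in-features critic
# Q(s, a) = wc @ phi(s, a): toy stand-ins for the paper's deep networks.
def phi(s, a):
    return np.concatenate([s, a, s * a[0], [1.0]])

Wa = rng.normal(scale=0.1, size=(A_DIM, S_DIM))
Wa_t = Wa.copy()                                   # target actor
wc = rng.normal(scale=0.1, size=phi(np.zeros(S_DIM), np.zeros(A_DIM)).size)
wc_t = wc.copy()                                   # target critic

def step(s, a):
    """Assumed environment: stable linearized error dynamics, quadratic cost."""
    s_next = 0.9 * s + np.array([0.1, 0.05]) * a[0] + rng.normal(scale=0.01, size=S_DIM)
    r = -(s @ s) - 0.01 * a[0] ** 2                # penalize error and effort
    return s_next, r

s = rng.normal(size=S_DIM)
for t in range(2000):
    # Deterministic policy plus exploration noise (DDPG explores by perturbation,
    # not by sampling a stochastic policy).
    a = Wa @ s + rng.normal(scale=0.05, size=A_DIM)
    s2, r = step(s, a)

    # Critic: off-policy TD update; the target uses the *target* actor/critic.
    y = r + GAMMA * (wc_t @ phi(s2, Wa_t @ s2))
    td = y - wc @ phi(s, a)
    wc += LR_C * td * phi(s, a)

    # Actor: deterministic policy gradient, dQ/da (finite difference) * dmu/dWa.
    eps = 1e-4
    a_mu = Wa @ s
    dq_da = (wc @ phi(s, a_mu + eps) - wc @ phi(s, a_mu - eps)) / (2 * eps)
    Wa += LR_A * dq_da * np.outer(np.ones(A_DIM), s)

    # Polyak-averaged target-network updates stabilize the bootstrapped targets.
    Wa_t = TAU * Wa + (1 - TAU) * Wa_t
    wc_t = TAU * wc + (1 - TAU) * wc_t
    s = s2
```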
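
Finally, the GWO stage that seeds the controller gains follows the textbook Grey Wolf Optimizer (alpha/beta/delta leaders guiding the pack, with a linearly decaying exploration coefficient). The cost function below is a hypothetical quadratic stand-in; in the paper it would be a closed-loop performance index evaluated on the converter.

```python
import numpy as np

rng = np.random.default_rng(1)

def gwo(cost, dim, bounds, n_wolves=10, iters=50):
    """Minimal textbook Grey Wolf Optimizer; a sketch, not the paper's code."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(cost, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / iters)          # exploration factor decays 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                new += leader - A * D      # move relative to each leader
            X[i] = np.clip(new / 3.0, lo, hi)  # average of the three moves
    fitness = np.apply_along_axis(cost, 1, X)
    return X[np.argmin(fitness)]

# Hypothetical use: tune two backstepping gains (k1, k2) against a
# surrogate cost whose minimum sits at the assumed values (5, 12).
best = gwo(lambda k: (k[0] - 5.0) ** 2 + (k[1] - 12.0) ** 2,
           dim=2, bounds=(0.0, 50.0))
print("GWO-selected initial gains:", best)
```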

Suggested Citation

  • Seyyed Morteza Ghamari & Asma Aziz & Mehrdad Ghahramani, 2025. "Design of Robust Adaptive Nonlinear Backstepping Controller Enhanced by Deep Deterministic Policy Gradient Algorithm for Efficient Power Converter Regulation," Energies, MDPI, vol. 18(18), pages 1-27, September.
  • Handle: RePEc:gam:jeners:v:18:y:2025:i:18:p:4941-:d:1751277

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/18/18/4941/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/18/18/4941/
    Download Restriction: no

