Author
Listed:
- Ahmed M Hassan
- Jafar Ababneh
- Hani Attar
- Tamer Shamseldin
- Ahmed Abdelbaset
- Mohamed Eladly Metwally
Abstract
Enhancing the performance of 5ph-IPMSM control plays a crucial role in advancing innovative applications such as electric vehicles. This paper proposes a new reinforcement learning (RL) control algorithm based on the twin-delayed deep deterministic policy gradient (TD3) algorithm to tune two cascaded PI controllers in a five-phase interior permanent magnet synchronous motor (5ph-IPMSM) drive system based on model predictive control (MPC). The main purpose of the control methodology is to optimize the 5ph-IPMSM speed response in either the constant torque region or the constant power region. The speed responses obtained using the RL control algorithm are compared with those obtained using four of the most recent metaheuristic optimization techniques (MHOT): Transit Search (TS), the Honey Badger Algorithm (HBA), Dwarf Mongoose (DM), and the Dandelion Optimizer (DO). The speed responses are compared in terms of settling time, rise time, peak time, and maximum overshoot percentage. It is found that the suggested TD3-based RL gives the minimum settling time and relatively low values for rise time, peak time, and overshoot percentage, which makes the RL speed responses superior to those obtained with the four MHOT. The drive system speed responses are obtained in the constant torque region and the constant power region using the MATLAB/Simulink package.
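To make the tuning setup concrete: the TD3 agent's action is a set of PI gains for the cascaded speed and current loops, and its reward is built from the speed-response metrics the paper compares (settling time, overshoot). The following minimal Python sketch illustrates that loop under heavily simplified, hypothetical assumptions; the first-order plant, the gain values, and the reward weights are illustrative only and are not the paper's actual 5ph-IPMSM/MPC drive model.

import numpy as np

def simulate_speed_response(kp_w, ki_w, kp_i, ki_i,
                            w_ref=100.0, dt=1e-3, t_end=1.0):
    """Toy cascaded-PI loop: an outer speed PI commands a current
    reference; an inner current PI commands voltage to a simplified
    first-order plant (illustrative stand-in for the drive)."""
    n = int(t_end / dt)
    w, i_q, e_w_int, e_i_int = 0.0, 0.0, 0.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        e_w = w_ref - w                      # speed error
        e_w_int += e_w * dt
        i_ref = kp_w * e_w + ki_w * e_w_int  # outer loop: current reference
        e_i = i_ref - i_q                    # current error
        e_i_int += e_i * dt
        v = kp_i * e_i + ki_i * e_i_int      # inner loop: voltage command
        i_q += dt * (-5.0 * i_q + v)         # toy electrical dynamics
        w += dt * (-0.5 * w + 2.0 * i_q)     # toy mechanical dynamics
        trace[k] = w
    return trace

def reward(trace, w_ref=100.0, dt=1e-3):
    """Reward from the paper's comparison metrics: penalize overshoot
    and slow settling into a 2% band (weights are assumptions)."""
    overshoot = max(trace.max() - w_ref, 0.0) / w_ref
    outside = np.flatnonzero(np.abs(trace - w_ref) > 0.02 * w_ref)
    settling = (outside[-1] + 1) * dt if outside.size else 0.0
    return -(10.0 * overshoot + settling)

# Example: score one candidate gain set (values are hypothetical).
r = reward(simulate_speed_response(kp_w=0.5, ki_w=2.0, kp_i=4.0, ki_i=50.0))

A TD3 agent from any standard RL library would then treat the four gains as its action vector and this scalar reward as its training signal, which is the role the abstract assigns to the RL algorithm.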
Suggested Citation
Ahmed M Hassan & Jafar Ababneh & Hani Attar & Tamer Shamseldin & Ahmed Abdelbaset & Mohamed Eladly Metwally, 2025.
"Reinforcement learning algorithm for improving speed response of a five-phase permanent magnet synchronous motor based model predictive control,"
PLOS ONE, Public Library of Science, vol. 20(1), pages 1-27, January.
Handle:
RePEc:plo:pone00:0316326
DOI: 10.1371/journal.pone.0316326