Authors:
- Cestero, Julen
- Delle Femine, Carmine
- S. Muro, Kenji
- Quartulli, Marco
- Restelli, Marcello
Abstract
Optimizing energy management within a smart grid presents significant challenges, primarily due to the complexity of real-world systems and the intricate interactions among their components. Reinforcement Learning (RL) is gaining prominence as a solution to the Optimal Power Flow (OPF) problem in smart grids. However, RL must interact repeatedly with a given environment to obtain the optimal policy, which means drawing samples from a most likely costly simulator and can lead to a sample-efficiency problem. In this work, we address this problem by substituting costly smart grid simulators with surrogate models built using Physics-Informed Neural Networks (PINNs), optimizing the RL policy training process and reaching convergent results in a fraction of the time required by the original environment. Specifically, we tested the performance of our PINN surrogate against other state-of-the-art data-driven surrogates and found that encoding the underlying physical nature of the problem makes the PINN surrogate the only method we studied capable of learning a good RL policy, in addition to requiring no samples from the real simulator. Our work shows that, by employing PINN surrogates, we can improve training speed by 50% compared to training the RL policy without any surrogate model, achieving scores on par with the original simulator more rapidly.
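The abstract describes training a surrogate with a physics-informed loss so the RL policy can be trained without sampling the costly simulator. The sketch below illustrates the general PINN idea only: a composite loss that adds a physics-residual penalty (here, a simple power-balance constraint) to the usual data-fit term. The function names, the power-balance form, and the weighting `lam` are illustrative assumptions, not the paper's actual formulation.

```python
def physics_residual(p_gen, p_load, p_loss):
    # Power-balance residual: generation minus load minus losses.
    # For a physically consistent grid state this should be ~0.
    # (Assumed constraint for illustration; the paper's physics terms may differ.)
    return [g - d - s for g, d, s in zip(p_gen, p_load, p_loss)]

def pinn_loss(pred, target, p_gen, p_load, p_loss, lam=0.1):
    # Composite PINN-style loss: MSE data term plus a weighted penalty
    # on the squared physics residual of the surrogate's predictions.
    data_term = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    res = physics_residual(p_gen, p_load, p_loss)
    phys_term = sum(r ** 2 for r in res) / len(res)
    return data_term + lam * phys_term

# Predictions that fit the data perfectly but violate power balance
# on the first sample still incur a nonzero loss:
loss = pinn_loss(pred=[1.0, 2.0], target=[1.0, 2.0],
                 p_gen=[5.0, 5.0], p_load=[4.0, 5.0], p_loss=[0.0, 0.0])
```

In a full PINN the residual would be differentiated through the network during training; this fragment only shows how the physics term shapes the objective.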
Suggested Citation
Cestero, Julen & Delle Femine, Carmine & S. Muro, Kenji & Quartulli, Marco & Restelli, Marcello, 2025.
"Optimizing energy management of smart grid using reinforcement learning aided by surrogate models built using physics-informed neural networks,"
Applied Energy, Elsevier, vol. 401(PC).
Handle:
RePEc:eee:appene:v:401:y:2025:i:pc:s0306261925014801
DOI: 10.1016/j.apenergy.2025.126750