
An optimal solutions-guided deep reinforcement learning approach for online energy storage control

Author

Listed:
  • Xu, Gaoyuan
  • Shi, Jian
  • Wu, Jiaman
  • Lu, Chenbei
  • Wu, Chenye
  • Wang, Dan
  • Han, Zhu

Abstract

As renewable energy becomes more prevalent in the power grid, energy storage systems (ESSs) play an increasingly crucial role in mitigating short-term supply–demand imbalances. However, operating and controlling an ESS is not straightforward, given the ever-changing electricity prices in the market environment and the stochastic, intermittent nature of renewable generation, which must respond to real-time load variations. In this paper, we propose a deep reinforcement learning (DRL) approach to the electricity arbitrage problem associated with optimal ESS management. First, we analyze the structure of the optimal offline ESS control problem using a mixed-integer linear programming (MILP) formulation, which identifies control actions that absorb excess renewable energy and execute price arbitrage strategies. To tackle the uncertainties inherent in prediction data, we then recast the online ESS control problem as a Markov decision process (MDP) and develop a DRL approach that integrates the optimal offline control solution obtained from the training data into the training process and injects noise into the state transitions. Unlike typical offline DRL training over a long time interval, we employ the Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) algorithms with smaller neural networks trained over a short time interval. Numerical studies demonstrate that the proposed DRL-enabled approach achieves better online control performance than a model predictive control (MPC) baseline under different price errors, highlighting the sample efficiency and robustness of our DRL approach in managing ESSs for electricity arbitrage.
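The paper solves the offline problem with a MILP; as a rough, self-contained illustration of the same arbitrage structure (not the authors' formulation), the offline optimum with known prices can also be sketched as a dynamic program over a discretized battery state of charge. All parameter names and values below (capacity `e_max`, power limit `p_max`, round-trip split efficiency `eta`, grid resolution `levels`) are illustrative assumptions:

```python
def offline_arbitrage(prices, e_max=4.0, p_max=1.0, eta=0.9, levels=5):
    """Maximize arbitrage profit for a battery under known prices.

    Charging delta MWh into the battery buys delta/eta from the grid;
    discharging delta MWh sells delta*eta back. Backward induction over
    a discretized state-of-charge (SoC) grid yields the offline optimum
    on that grid.
    """
    grid = [e_max * i / (levels - 1) for i in range(levels)]
    T = len(prices)
    # value[t][i]: max profit over steps t..T-1 starting at SoC grid[i]
    value = [[0.0] * levels for _ in range(T + 1)]
    best = [[0] * levels for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for i, soc in enumerate(grid):
            value[t][i] = float("-inf")
            for j, nxt in enumerate(grid):
                delta = nxt - soc            # energy change in the battery
                if abs(delta) > p_max:       # per-step power limit
                    continue
                if delta >= 0:               # charge: pay for delta/eta
                    cash = -prices[t] * delta / eta
                else:                        # discharge: earn for -delta*eta
                    cash = -prices[t] * delta * eta
                cand = cash + value[t + 1][j]
                if cand > value[t][i]:
                    value[t][i] = cand
                    best[t][i] = j
    # Recover the optimal schedule starting from an empty battery.
    schedule, i = [], 0
    for t in range(T):
        j = best[t][i]
        schedule.append(grid[j] - grid[i])
        i = j
    return value[0][0], schedule
```

For example, with prices `[10, 50]`, a 1 MWh / 1 MW battery, and `eta=1.0`, the sketch buys at 10 and sells at 50 for a profit of 40. The paper's DRL agent is then trained online against such offline-optimal solutions computed on the training data; this snippet only illustrates what "optimal offline control" means in the arbitrage setting.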

Suggested Citation

  • Xu, Gaoyuan & Shi, Jian & Wu, Jiaman & Lu, Chenbei & Wu, Chenye & Wang, Dan & Han, Zhu, 2024. "An optimal solutions-guided deep reinforcement learning approach for online energy storage control," Applied Energy, Elsevier, vol. 361(C).
  • Handle: RePEc:eee:appene:v:361:y:2024:i:c:s0306261924002988
    DOI: 10.1016/j.apenergy.2024.122915

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924002988
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.122915?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:361:y:2024:i:c:s0306261924002988. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.