Authors
Listed:
- Bashyal, Atit
- Boroukhian, Tina
- Veerachanchai, Pakin
- Naransukh, Myanganbayar
- Wicaksono, Hendro
Abstract
Energy-centric decarbonization of heavy industries such as steel and cement requires their participation in integrating Renewable Energy Sources (RES) and in effective Demand Response (DR) programs. This need has created opportunities to research control algorithms for diverse DR scenarios. At the same time, the industrial sector's characteristics, including the diversity of operations and the need for uninterrupted production, pose unique challenges in designing and implementing such control algorithms. Reinforcement learning (RL) methods are practical solutions to these challenges. Nevertheless, research on RL for industrial demand response has not yet reached the level of standardization seen in other areas of RL research, hindering broader progress. To advance this research, we propose a multi-agent reinforcement learning (MARL)-based energy management system designed to optimize energy consumption in energy-intensive industrial settings by leveraging dynamic-pricing DR schemes. The study describes the creation of a MARL environment and addresses these challenges through a general framework that allows researchers to replicate and implement MARL environments for industrial sectors. The proposed framework models energy consumption and production processes as a Partially Observable Markov Decision Process (POMDP), introduces buffer storage constraints, and uses a flexible reward function that balances production efficiency and cost reduction. The paper evaluates the framework through experimental validation within a steel powder manufacturing facility. The experimental results validate the framework and demonstrate the effectiveness of the MARL-based energy management system.
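The abstract's main ingredients (machines as agents, buffer storage constraints, partial observability, and a reward that weighs production efficiency against energy cost under dynamic prices) can be sketched as a toy environment. This is a hypothetical illustration, not the paper's implementation: all class names, buffer sizes, price ranges, and reward weights below are assumptions chosen for clarity.

```python
# Hypothetical sketch (not the paper's framework): a minimal multi-agent
# DR environment with buffer constraints and a reward that trades off
# production throughput against energy cost under a dynamic price.
import random


class DREnvironment:
    """Each agent controls one machine; its action is run (1) or idle (0).

    Machine i draws material from buffer i and pushes output to buffer
    i+1, so finite buffer capacities constrain when running is feasible.
    The setting is POMDP-like: each agent observes only its two local
    buffers and the current tariff, not the full plant state.
    """

    def __init__(self, n_machines=3, buffer_capacity=10, alpha=1.0, beta=0.5):
        self.n = n_machines
        self.cap = buffer_capacity
        self.alpha = alpha        # weight on production efficiency
        self.beta = beta          # weight on energy-cost reduction
        # buffers[0] holds raw input; buffers[n] collects finished output.
        self.buffers = [self.cap] + [0] * self.n
        self.price = 0.2

    def _observe(self, i):
        # Partial observation: local upstream/downstream buffers + price.
        return (self.buffers[i], self.buffers[i + 1], self.price)

    def step(self, actions):
        """actions: list of 0/1, one per machine. Returns (obs, rewards)."""
        self.price = random.uniform(0.1, 0.5)  # dynamic-pricing DR signal
        rewards = []
        for i, a in enumerate(actions):        # machines along a flow line
            produced, energy = 0, 0
            feasible = self.buffers[i] > 0 and self.buffers[i + 1] < self.cap
            if a == 1 and feasible:
                self.buffers[i] -= 1           # consume one input unit
                self.buffers[i + 1] += 1       # emit one output unit
                produced, energy = 1, 1
            # Flexible reward: production term minus weighted energy cost.
            rewards.append(self.alpha * produced
                           - self.beta * self.price * energy)
        return [self._observe(i) for i in range(self.n)], rewards
```

Under these assumed weights, a machine earns more by running when the tariff is low and loses less by idling when it is high, which is the basic incentive a dynamic-pricing DR scheme provides; the buffer terms keep upstream and downstream machines coordinated.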
Suggested Citation
Bashyal, Atit & Boroukhian, Tina & Veerachanchai, Pakin & Naransukh, Myanganbayar & Wicaksono, Hendro, 2025.
"Multi-agent deep reinforcement learning based demand response and energy management for heavy industries with discrete manufacturing systems,"
Applied Energy, Elsevier, vol. 392(C).
Handle:
RePEc:eee:appene:v:392:y:2025:i:c:s0306261925007202
DOI: 10.1016/j.apenergy.2025.125990