Printed from https://ideas.repec.org/a/eee/appene/v268y2020ics0306261920304554.html

Operation scheduling in a solar thermal system: A reinforcement learning-based framework

Author

Listed:
  • Correa-Jullian, Camila
  • López Droguett, Enrique
  • Cardemil, José Miguel

Abstract

Reinforcement learning (RL) provides an alternative method for designing condition-based decision making in engineering systems. In this study, a simple and flexible tabular Q-learning framework is employed to identify optimal operation schedules for a solar hot-water system according to action–reward feedback. The system is simulated in the TRNSYS software. Three energy sources supply a building’s hot-water demand: low-cost heat from solar thermal collectors and from a heat-recovery chiller, backed by a conventional heat pump. Key performance indicators are used as rewards to balance the system’s performance in terms of energy efficiency, heat-load delivery, and operational cost. A sensitivity analysis is performed for different reward functions and meteorological conditions. Optimal schedules are obtained for selected scenarios in January, April, July, and October, according to the dynamic conditions of the system. The results indicate that when solar radiation is widely available (October through April), the nominal operation schedule frequently yields the highest performance. However, the obtained schedule differs when solar radiation is reduced, for instance in July. When the efficient use of both low-cost energy sources is prioritized, the performance in July can be, on average, 21% higher than under nominal schedule-based operation.
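The tabular Q-learning loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hourly states, the three action labels, and the toy reward function stand in for the KPI-based rewards (energy efficiency, heat-load delivery, operating cost) that the paper computes from the TRNSYS simulation.

```python
import random

# Minimal tabular Q-learning sketch for hourly operation scheduling.
# States are hours of the day; actions are which heat source to dispatch.
# All names and numbers below are illustrative assumptions.

HOURS = 24
ACTIONS = ["solar", "chiller", "heat_pump"]  # hypothetical action labels

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate


def reward(hour: int, action: str) -> float:
    """Toy reward: solar pays off at midday, the heat-recovery chiller is a
    steady low-cost source, and the heat pump is a costly backup."""
    if action == "solar":
        return max(0.0, 1.0 - abs(hour - 12) / 6.0)  # peaks at noon
    if action == "chiller":
        return 0.4
    return 0.1  # conventional heat pump: reliable but expensive


def train(episodes: int = 2000, seed: int = 0) -> dict:
    rng = random.Random(seed)
    q = {(h, a): 0.0 for h in range(HOURS) for a in ACTIONS}
    for _ in range(episodes):
        for hour in range(HOURS):
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(hour, a)])
            r = reward(hour, action)
            next_hour = (hour + 1) % HOURS
            best_next = max(q[(next_hour, a)] for a in ACTIONS)
            # tabular Q-learning update rule
            q[(hour, action)] += ALPHA * (r + GAMMA * best_next - q[(hour, action)])
    return q


if __name__ == "__main__":
    q = train()
    # Read off the learned schedule: the greedy action for each hour
    schedule = {h: max(ACTIONS, key=lambda a: q[(h, a)]) for h in range(HOURS)}
    print(schedule)
```

The learned greedy policy recovers the intuitive schedule under this toy reward: the solar source is dispatched around midday and the chiller during hours without radiation, mirroring the paper's finding that the best schedule shifts when solar availability drops.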

Suggested Citation

  • Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
  • Handle: RePEc:eee:appene:v:268:y:2020:i:c:s0306261920304554
    DOI: 10.1016/j.apenergy.2020.114943

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261920304554
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2020.114943?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ntsaluba, Sula & Zhu, Bing & Xia, Xiaohua, 2016. "Optimal flow control of a forced circulation solar water heating system with energy storage units and connecting pipes," Renewable Energy, Elsevier, vol. 89(C), pages 108-124.
    2. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    3. Kazmi, Hussain & Mehmood, Fahad & Lodeweyckx, Stefan & Driesen, Johan, 2018. "Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems," Energy, Elsevier, vol. 144(C), pages 159-168.
    4. Ayompe, L.M. & Duffy, A. & Mc Keever, M. & Conlon, M. & McCormack, S.J., 2011. "Comparative field performance study of flat plate and heat pipe evacuated tube collectors (ETCs) for domestic water heating systems in a temperate climate," Energy, Elsevier, vol. 36(5), pages 3370-3378.
    5. Sharma, Amandeep & Kakkar, Ajay, 2018. "Forecasting daily global solar irradiance generation using machine learning," Renewable and Sustainable Energy Reviews, Elsevier, vol. 82(P3), pages 2254-2269.
    6. Xiao Wang & Hongwei Wang & Chao Qi, 2016. "Multi-agent reinforcement learning based maintenance policy for a resource constrained flow line system," Journal of Intelligent Manufacturing, Springer, vol. 27(2), pages 325-333, April.
    7. Shafieian, Abdellah & Khiadani, Mehdi & Nosrati, Ataollah, 2018. "A review of latest developments, progress, and applications of heat pipe solar collectors," Renewable and Sustainable Energy Reviews, Elsevier, vol. 95(C), pages 273-304.
    8. Lu, Renzhi & Hong, Seung Ho & Zhang, Xiongfeng, 2018. "A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach," Applied Energy, Elsevier, vol. 220(C), pages 220-230.
    9. Yang, Lei & Nagy, Zoltan & Goffin, Philippe & Schlueter, Arno, 2015. "Reinforcement learning for optimal control of low exergy buildings," Applied Energy, Elsevier, vol. 156(C), pages 577-586.
    10. Rocchetta, R. & Bellani, L. & Compare, M. & Zio, E. & Patelli, E., 2019. "A reinforcement learning framework for optimal operation and maintenance of power grids," Applied Energy, Elsevier, vol. 241(C), pages 291-301.
    11. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    12. Correa-Jullian, Camila & Cardemil, José Miguel & López Droguett, Enrique & Behzad, Masoud, 2020. "Assessment of Deep Learning techniques for Prognosis of solar thermal systems," Renewable Energy, Elsevier, vol. 145(C), pages 2178-2191.
    13. Sharma, Ashish K. & Sharma, Chandan & Mullick, Subhash C. & Kandpal, Tara C., 2017. "Solar industrial process heating: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 78(C), pages 124-137.
    14. Kalogirou, S.A. & Agathokleous, R. & Barone, G. & Buonomano, A. & Forzano, C. & Palombo, A., 2019. "Development and validation of a new TRNSYS Type for thermosiphon flat-plate solar thermal collectors: energy and economic optimization for hot water production in different climates," Renewable Energy, Elsevier, vol. 136(C), pages 632-644.
    15. Hossain, M.S. & Saidur, R. & Fayaz, H. & Rahim, N.A. & Islam, M.R. & Ahamed, J.U. & Rahman, M.M., 2011. "Review on solar water heater collector and thermal energy performance of circulating pipe," Renewable and Sustainable Energy Reviews, Elsevier, vol. 15(8), pages 3801-3812.
    16. Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
    17. Ge, T.S. & Wang, R.Z. & Xu, Z.Y. & Pan, Q.W. & Du, S. & Chen, X.M. & Ma, T. & Wu, X.N. & Sun, X.L. & Chen, J.F., 2018. "Solar heating and cooling: Present and future development," Renewable Energy, Elsevier, vol. 126(C), pages 1126-1140.
    18. Stephane R. A. Barde & Soumaya Yacout & Hayong Shin, 2019. "Optimal preventive maintenance policy based on reinforcement learning of a fleet of military trucks," Journal of Intelligent Manufacturing, Springer, vol. 30(1), pages 147-161, January.
    19. Bava, Federico & Furbo, Simon, 2017. "Development and validation of a detailed TRNSYS-Matlab model for large solar collector fields for district heating applications," Energy, Elsevier, vol. 135(C), pages 698-708.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Gil, Juan D. & Topa, A. & Álvarez, J.D. & Torres, J.L. & Pérez, M., 2022. "A review from design to control of solar systems for supplying heat in industrial process applications," Renewable and Sustainable Energy Reviews, Elsevier, vol. 163(C).
    2. Zedong Jiao & Xiuli Du & Zhansheng Liu & Liang Liu & Zhe Sun & Guoliang Shi & Ruirui Liu, 2023. "A Review of Theory and Application Development of Intelligent Operation Methods for Large Public Buildings," Sustainability, MDPI, vol. 15(12), pages 1-28, June.
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Zhang, Xiongfeng & Lu, Renzhi & Jiang, Junhui & Hong, Seung Ho & Song, Won Seok, 2021. "Testbed implementation of reinforcement learning-based demand response energy management system," Applied Energy, Elsevier, vol. 297(C).
    5. Chen, Minghao & Xie, Zhiyuan & Sun, Yi & Zheng, Shunlin, 2023. "The predictive management in campus heating system based on deep reinforcement learning and probabilistic heat demands forecasting," Applied Energy, Elsevier, vol. 350(C).
    6. Heidari, Amirreza & Maréchal, François & Khovalyg, Dolaana, 2022. "Reinforcement Learning for proactive operation of residential energy systems by learning stochastic occupant behavior and fluctuating solar energy: Balancing comfort, hygiene and energy use," Applied Energy, Elsevier, vol. 318(C).
    7. Lillo-Bravo, I. & Vera-Medina, J. & Fernandez-Peruchena, C. & Perez-Aparicio, E. & Lopez-Alvarez, J.A. & Delgado-Sanchez, J.M., 2023. "Random Forest model to predict solar water heating system performance," Renewable Energy, Elsevier, vol. 216(C).
    8. Heidari, Amirreza & Maréchal, François & Khovalyg, Dolaana, 2022. "An occupant-centric control framework for balancing comfort, energy use and hygiene in hot water systems: A model-free reinforcement learning approach," Applied Energy, Elsevier, vol. 312(C).
    9. Zhou, Xin & Tian, Shuai & An, Jingjing & Yan, Da & Zhang, Lun & Yang, Junyan, 2022. "Modeling occupant behavior’s influence on the energy efficiency of solar domestic hot water systems," Applied Energy, Elsevier, vol. 309(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    2. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    3. Zhang, Xiongfeng & Lu, Renzhi & Jiang, Junhui & Hong, Seung Ho & Song, Won Seok, 2021. "Testbed implementation of reinforcement learning-based demand response energy management system," Applied Energy, Elsevier, vol. 297(C).
    4. Rocchetta, R. & Bellani, L. & Compare, M. & Zio, E. & Patelli, E., 2019. "A reinforcement learning framework for optimal operation and maintenance of power grids," Applied Energy, Elsevier, vol. 241(C), pages 291-301.
    5. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    6. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    7. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    8. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    9. Sabarathinam Srinivasan & Suresh Kumarasamy & Zacharias E. Andreadakis & Pedro G. Lind, 2023. "Artificial Intelligence and Mathematical Models of Power Grids Driven by Renewable Energy Sources: A Survey," Energies, MDPI, vol. 16(14), pages 1-56, July.
    10. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    11. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
    12. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    13. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    14. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    15. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    16. Grace Muriithi & Sunetra Chowdhury, 2021. "Optimal Energy Management of a Grid-Tied Solar PV-Battery Microgrid: A Reinforcement Learning Approach," Energies, MDPI, vol. 14(9), pages 1-24, May.
    17. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    18. Lilia Tightiz & Joon Yoo, 2022. "A Review on a Data-Driven Microgrid Management System Integrating an Active Distribution Network: Challenges, Issues, and New Trends," Energies, MDPI, vol. 15(22), pages 1-24, November.
    19. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    20. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:268:y:2020:i:c:s0306261920304554. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.