
Reinforcement Learning and Modeling Techniques: A Review

Author

Listed:
  • Hindreen Rashid Abdulqadir

    (Information Technology Department, Akre Technical College of Informatics, Duhok Polytechnic University, Duhok, Kurdistan Region, Iraq)

  • Adnan Mohsin Abdulazeez

    (Duhok Polytechnic University, Duhok, Kurdistan Region, Iraq)

Abstract

Reinforcement learning (RL) algorithms solve a wide range of practical problems, and the field has reached a new level of public attention. A major difficulty in large-scale real-world deployment is making effective use of large, previously collected datasets within RL algorithms. To circumvent these restrictions, we introduce a Q-learning (QL) approach that learns a conservative Q-function, under which the estimated value of a policy lower-bounds its true value. In this study we review reinforcement learning techniques. In theory, we show that QL produces a lower bound on the value of the current policy and that this bound can be incorporated into policy learning with theoretical improvement guarantees. In practice, QL augments the standard objective with a simple Q-value regularizer that is readily implemented on top of existing Q-learning and actor-critic applications. The findings indicate that all of the evaluated algorithms are able to learn to play successfully. By comparison, all double Q-learning variants achieve significantly higher scores than standard Q-learning, while the incremental reward function shows no improvement over the normal reward function. We also present an attack mechanism that exploits the transferability of adversarial examples to mount policy-induction attacks, and we demonstrate its effectiveness and consequences through a pilot study of a game-learning scenario.
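
The conservative Q-learning idea summarized above admits a compact illustration. The following is a minimal tabular sketch of that idea, not the authors' implementation: it augments the standard temporal-difference update with a penalty whose gradient lowers the soft-maximum of Q over actions while raising the value of the action actually logged in the dataset, so the learned Q-function tends to lower-bound the current policy's value. The environment dimensions, hyperparameter values, and the penalty weight alpha are illustrative assumptions.

    import numpy as np

    n_states, n_actions = 10, 4
    Q = np.zeros((n_states, n_actions))

    lr = 0.1       # learning rate (illustrative)
    gamma = 0.99   # discount factor (illustrative)
    alpha = 0.5    # conservative penalty weight (hypothetical value)

    def conservative_q_update(s, a, r, s_next, behavior_a):
        # Standard Q-learning TD step toward r + gamma * max_a' Q(s', a').
        td_target = r + gamma * Q[s_next].max()
        Q[s, a] += lr * (td_target - Q[s, a])
        # Conservative regularizer: a gradient step on
        # alpha * (logsumexp_a Q(s, a) - Q(s, behavior_a)), which pushes the
        # soft-maximum of Q down and the logged action's value up, keeping
        # out-of-distribution actions pessimistically valued.
        soft = np.exp(Q[s] - Q[s].max())
        soft /= soft.sum()              # softmax over actions at state s
        grad = soft.copy()              # gradient of logsumexp_a Q(s, a)
        grad[behavior_a] -= 1.0         # gradient of -Q(s, behavior_a)
        Q[s] -= lr * alpha * grad

    # One update from a logged transition (illustrative numbers).
    conservative_q_update(s=3, a=1, r=1.0, s_next=4, behavior_a=1)

In an offline setting this update would be applied only to transitions drawn from the fixed dataset, which is where the conservative term matters: it discourages the policy from exploiting overestimated values of actions the dataset never took.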

Suggested Citation

  • Hindreen Rashid Abdulqadir & Adnan Mohsin Abdulazeez, 2021. "Reinforcement Learning and Modeling Techniques: A Review," International Journal of Science and Business, IJSAB International, vol. 5(3), pages 174-189.
  • Handle: RePEc:aif:journl:v:5:y:2021:i:3:p:174-189

    Download full text from publisher

    File URL: https://ijsab.com/wp-content/uploads/696.pdf
    Download Restriction: no

    File URL: https://ijsab.com/volume-5-issue-3/3735
    Download Restriction: no

    References listed on IDEAS

    1. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    2. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    3. Chang, Soowon & Saha, Nirvik & Castro-Lacouture, Daniel & Yang, Perry Pei-Ju, 2019. "Multivariate relationships between campus design parameters and energy performance using reinforcement learning and parametric modeling," Applied Energy, Elsevier, vol. 249(C), pages 253-264.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    2. Langer, Lissy & Volling, Thomas, 2022. "A reinforcement learning approach to home energy management for modulating heat pumps and photovoltaic systems," Applied Energy, Elsevier, vol. 327(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    5. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    6. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    7. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    8. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    9. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    10. Michael Bachseitz & Muhammad Sheryar & David Schmitt & Thorsten Summ & Christoph Trinkl & Wilfried Zörner, 2024. "PV-Optimized Heat Pump Control in Multi-Family Buildings Using a Reinforcement Learning Approach," Energies, MDPI, vol. 17(8), pages 1-16, April.
    11. Lilia Tightiz & Joon Yoo, 2022. "A Review on a Data-Driven Microgrid Management System Integrating an Active Distribution Network: Challenges, Issues, and New Trends," Energies, MDPI, vol. 15(22), pages 1-24, November.
    12. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    13. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    14. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    15. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    16. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    17. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
    18. Sun, Fangyuan & Kong, Xiangyu & Wu, Jianzhong & Gao, Bixuan & Chen, Ke & Lu, Ning, 2022. "DSM pricing method based on A3C and LSTM under cloud-edge environment," Applied Energy, Elsevier, vol. 315(C).
    19. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    20. Kim, Sunwoo & Choi, Yechan & Park, Joungho & Adams, Derrick & Heo, Seongmin & Lee, Jay H., 2024. "Multi-period, multi-timescale stochastic optimization model for simultaneous capacity investment and energy management decisions for hybrid Micro-Grids with green hydrogen production under uncertainty," Renewable and Sustainable Energy Reviews, Elsevier, vol. 190(PA).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:aif:journl:v:5:y:2021:i:3:p:174-189. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Farjana Rahman (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.