Printed from https://ideas.repec.org/a/taf/tsysxx/v56y2025i14p3528-3557.html

Exploring reinforcement learning in process control: a comprehensive survey

Author

Listed:
  • N. Rajasekhar
  • T.K. Radhakrishnan
  • N. Samsudeen

Abstract

Reinforcement Learning (RL) is a machine learning methodology that learns to make sequential decisions in complex problems through trial and error. RL has become increasingly prevalent for decision-making and control tasks in diverse fields such as industrial processes, biochemical systems and energy management. This review paper presents a comprehensive examination of the development, models, algorithms and practical uses of RL, with a specific emphasis on its application in process control. The study examines the fundamental theories, methodologies and applications of RL, classifying them into two categories: classical RL, such as Markov decision processes (MDP), and deep RL, such as actor-critic methods. RL is discussed in the context of multiple process industries, including industrial chemical process control, biochemical process control, energy systems, wastewater treatment and the oil and gas sector. Nevertheless, the paper also highlights challenges that hinder its wider acceptance, including the requirement for substantial computational resources, the complexity of simulating real-world settings and the difficulty of guaranteeing the stability and resilience of RL algorithms in dynamic and unpredictable environments.
RL has demonstrated significant promise, but more research is needed to fully integrate it into industrial and environmental systems and to address the current challenges.

Abbreviations: AC: Actor critic; AI: Artificial intelligence; ANN: Artificial neural networks; A3C: Asynchronous advantage actor critic; CRL: Classical reinforcement learning; CV: Controlled variable; DDPG: Deep deterministic policy gradient; DQN: Deep Q network; DRL: Deep reinforcement learning; DP: Dynamic programming; FOMDP: Fully observable Markov decision process; GRU: Gated recurrent unit; LQR: Linear quadratic regulator; LSTM: Long short-term memory; ML: Machine learning; MV: Manipulated variable; MC: Monte Carlo; MDP: Markov decision process; MPC: Model predictive controller; MIMO: Multi input multi output; PG: Policy gradient; PID: Proportional integral derivative; PPO: Proximal policy optimisation; RL: Reinforcement learning; SAC: Soft actor critic; SISO: Single input single output; TD: Temporal difference; TRPO: Trust region policy optimisation; TD3: Twin delayed deep deterministic policy gradient.
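The classical-RL category discussed in the abstract (MDP-based, trial-and-error methods) can be illustrated with a minimal sketch. The toy tank-level environment, setpoint, and hyperparameters below are hypothetical examples, not taken from the surveyed paper; tabular Q-learning stands in for the broader family of classical RL algorithms:

```python
import numpy as np

# Hypothetical illustration: tabular Q-learning on a toy setpoint-tracking
# task. The state is a discretised tank level (0..10), the actions move the
# level by -1, 0, or +1 (the manipulated variable), and the reward penalises
# distance from the setpoint (the controlled variable's target).
rng = np.random.default_rng(0)

N_STATES, SETPOINT = 11, 5
ACTIONS = [-1, 0, +1]

def step(state, action):
    """Toy plant: the action shifts the level, clipped to the valid range."""
    nxt = int(np.clip(state + action, 0, N_STATES - 1))
    reward = -abs(nxt - SETPOINT)          # closer to setpoint = better
    return nxt, reward

# Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(2000):                      # episodes
    s = int(rng.integers(N_STATES))
    for _ in range(20):                    # steps per episode
        # epsilon-greedy exploration over the three actions
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, ACTIONS[a])
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Greedy policy: expected to push the level toward the setpoint and hold it.
policy = [ACTIONS[int(np.argmax(Q[s]))] for s in range(N_STATES)]
print(policy)
```

Deep-RL methods such as the actor-critic algorithms surveyed (DDPG, PPO, SAC) replace the Q-table with neural-network function approximators, which is what lets them scale to the continuous, high-dimensional state and action spaces of real process-control problems.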

Suggested Citation

  • N. Rajasekhar & T.K. Radhakrishnan & N. Samsudeen, 2025. "Exploring reinforcement learning in process control: a comprehensive survey," International Journal of Systems Science, Taylor & Francis Journals, vol. 56(14), pages 3528-3557, October.
  • Handle: RePEc:taf:tsysxx:v:56:y:2025:i:14:p:3528-3557
    DOI: 10.1080/00207721.2025.2469821

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/00207721.2025.2469821
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/00207721.2025.2469821?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As access to this document is restricted, you may want to search for a different version of it.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:tsysxx:v:56:y:2025:i:14:p:3528-3557. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/TSYS20 .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.