
Task Scheduling Mechanism Based on Reinforcement Learning in Cloud Computing

Author

Listed:
  • Yugui Wang

    (School of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
    School of Information Engineering, Nanjing Polytechnic Institute, Nanjing 210048, China)

  • Shizhong Dong

    (Wuhan Academy of Social Sciences, Wuhan 430019, China)

  • Weibei Fan

    (School of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210003, China)

Abstract

The explosive growth of users and applications in IoT environments has promoted the development of cloud computing. In the cloud computing environment, task scheduling plays a crucial role in optimizing resource utilization and improving overall performance, yet effective task scheduling remains a key challenge. Traditional task scheduling algorithms often rely on static heuristics or manual configuration, limiting their adaptability and efficiency. To overcome these limitations, interest is growing in applying reinforcement learning techniques to dynamic and intelligent task scheduling in cloud computing. How can reinforcement learning be applied to task scheduling in cloud computing? What are the benefits of reinforcement learning-based methods compared to traditional scheduling mechanisms? How does reinforcement learning optimize resource allocation and improve overall efficiency? To address these questions, this paper proposes a Q-learning-based Multi-Task Scheduling Framework (QMTSF). The framework consists of two stages: first, tasks are dynamically allocated to suitable servers in the cloud environment based on server type; second, an improved Q-learning algorithm called UCB-based Q-Reinforcement Learning (UQRL) runs on each server to assign tasks to Virtual Machines (VMs). The agent makes intelligent decisions based on past experiences and interactions with the environment, learning from rewards and punishments to formulate an optimal task allocation strategy and schedule tasks across different VMs. The goal is to minimize the total makespan and the average task processing time while meeting task deadlines. We conducted simulation experiments to compare the proposed mechanism against traditional scheduling methods such as Particle Swarm Optimization (PSO), random scheduling, and Round-Robin (RR). The experimental results demonstrate that the proposed QMTSF scheduling framework outperforms these scheduling mechanisms in terms of makespan and average task processing time.
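
The abstract describes a second stage in which each server runs a UCB-based Q-learning (UQRL) agent that assigns incoming tasks to VMs and learns from processing-time feedback. The paper's implementation is not reproduced on this page, so the following is only a minimal illustrative sketch of that idea in Python, under stated assumptions: the state is taken to be the task type, the action is the VM choice, the reward is the negative processing time, and all names and parameters (Task, UQRLScheduler, alpha, gamma, c) are hypothetical rather than the authors' code.

    # Minimal sketch (not the authors' implementation) of a UCB-based
    # Q-learning task-to-VM scheduler. The "state" is simply the task type
    # and no state transition is modeled, which is a simplification of the
    # full UQRL algorithm described in the paper.
    import math
    import random
    from dataclasses import dataclass

    @dataclass
    class Task:
        task_type: int      # index of the task category (used as the state)
        length: float       # workload size, e.g. million instructions

    class UQRLScheduler:
        def __init__(self, n_task_types, vm_speeds, alpha=0.1, gamma=0.9, c=2.0):
            self.vm_speeds = vm_speeds                 # processing speed of each VM
            self.alpha, self.gamma, self.c = alpha, gamma, c
            n_vms = len(vm_speeds)
            self.q = [[0.0] * n_vms for _ in range(n_task_types)]
            self.counts = [[0] * n_vms for _ in range(n_task_types)]
            self.t = 0                                 # total scheduling decisions made

        def select_vm(self, task):
            """Choose a VM by maximizing Q-value plus a UCB exploration bonus."""
            self.t += 1
            s = task.task_type
            best_vm, best_score = 0, float("-inf")
            for vm in range(len(self.vm_speeds)):
                n = self.counts[s][vm]
                # Unvisited (task type, VM) pairs get an infinite bonus so they are tried first.
                bonus = float("inf") if n == 0 else self.c * math.sqrt(math.log(self.t) / n)
                score = self.q[s][vm] + bonus
                if score > best_score:
                    best_vm, best_score = vm, score
            return best_vm

        def update(self, task, vm, processing_time):
            """Q-learning update with a reward that penalizes long processing times."""
            s = task.task_type
            reward = -processing_time
            best_next = max(self.q[s])                 # greedy value of the same state
            td_target = reward + self.gamma * best_next
            self.q[s][vm] += self.alpha * (td_target - self.q[s][vm])
            self.counts[s][vm] += 1

    # Toy usage: schedule random tasks on three VMs with different speeds.
    if __name__ == "__main__":
        scheduler = UQRLScheduler(n_task_types=2, vm_speeds=[1.0, 2.0, 4.0])
        for _ in range(1000):
            task = Task(task_type=random.randint(0, 1), length=random.uniform(10, 100))
            vm = scheduler.select_vm(task)
            time_taken = task.length / scheduler.vm_speeds[vm]
            scheduler.update(task, vm, time_taken)
        print(scheduler.q)

In this toy setup the agent gradually concentrates load on the faster VMs while the UCB bonus keeps occasionally probing the others; the full framework additionally handles deadlines and the first-stage allocation of tasks to servers, which are omitted here.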

Suggested Citation

  • Yugui Wang & Shizhong Dong & Weibei Fan, 2023. "Task Scheduling Mechanism Based on Reinforcement Learning in Cloud Computing," Mathematics, MDPI, vol. 11(15), pages 1-17, August.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:15:p:3364-:d:1208342

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/15/3364/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/15/3364/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:11:y:2023:i:15:p:3364-:d:1208342. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.