Printed from https://ideas.repec.org/a/gam/jmathe/v11y2023i20p4269-d1258795.html

Optimization of Task-Scheduling Strategy in Edge Kubernetes Clusters Based on Deep Reinforcement Learning

Author

Listed:
  • Xin Wang

    (School of Electrical & Information Engineering, Hunan University of Technology, Zhuzhou 412007, China)

  • Kai Zhao

    (College of Railway Transportation, Hunan University of Technology, Zhuzhou 412007, China)

  • Bin Qin

    (School of Electrical & Information Engineering, Hunan University of Technology, Zhuzhou 412007, China)

Abstract

Kubernetes, known for its versatility in infrastructure management, rapid scalability, and ease of deployment, is an excellent platform for edge computing. However, its native scheduling algorithm struggles with load balancing, especially during peak task deployment in edge environments characterized by resource limitations and low-latency demands. To address this issue, this paper proposes a proximal policy optimization with the least response time (PPO-LRT) algorithm. This deep reinforcement learning approach learns the pod-scheduling process and can adaptively schedule edge tasks to the most suitable worker nodes, with the shortest response time, according to the current cluster load and pod state. To evaluate the effectiveness of the proposed algorithm, we created multiple virtual machines, built a heterogeneous node cluster, and deployed k3s, a Kubernetes distribution suited for edge environments, on the cluster. Load balancing, resilience under high load, and average response time during peak task deployment were tested by initiating numerous tasks within a limited time frame. The results validate that the PPO-LRT-based scheduler outperforms the default kube-scheduler in cluster load balancing: after the deployment of 500 random tasks, several cluster nodes become overwhelmed under the kube-scheduler, whereas the PPO-LRT-based scheduler evenly allocates the workload across the cluster, reducing the average response time by approximately 31%.
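To make the least-response-time idea in the abstract concrete, the following is a minimal sketch (not the paper's implementation): each pod is greedily assigned to the node with the lowest estimated response time, and that node's estimate is then bumped by a fixed per-pod cost. In the paper this estimate comes from the learned PPO policy and live cluster state; the node names, the dictionary representation, and the constant `per_pod_cost` here are illustrative assumptions.

```python
def least_response_time(nodes):
    """Pick the node with the smallest estimated response time.

    `nodes` maps node name -> estimated response time (seconds).
    This mirrors the LRT heuristic the PPO-LRT scheduler builds on;
    the data structures are illustrative, not from the paper.
    """
    return min(nodes, key=nodes.get)


def schedule(pods, nodes, per_pod_cost=0.6):
    """Greedy LRT placement sketch.

    Assign each pod to the node currently offering the shortest
    response time, then increase that node's estimate by a fixed
    per-pod cost -- a crude stand-in for the learned state update.
    Returns a dict mapping pod name -> chosen node.
    """
    placement = {}
    for pod in pods:
        node = least_response_time(nodes)
        placement[pod] = node
        nodes[node] += per_pod_cost  # node now carries more load
    return placement
```

For example, with two nodes whose initial estimates are 1.0 s and 2.0 s, the first two pods land on the faster node until its rising estimate overtakes the other, after which placement alternates — the load-spreading behavior the abstract attributes to the PPO-LRT scheduler.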

Suggested Citation

  • Xin Wang & Kai Zhao & Bin Qin, 2023. "Optimization of Task-Scheduling Strategy in Edge Kubernetes Clusters Based on Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(20), pages 1-20, October.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:20:p:4269-:d:1258795

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/20/4269/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/20/4269/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:11:y:2023:i:20:p:4269-:d:1258795. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.