
MADDPG-Based Offloading Strategy for Timing-Dependent Tasks in Edge Computing

Author

Listed:
  • Yuchen Wang

    (School of Information and Electrical Engineering, Hebei University of Engineering, Handan 056038, China
    Hebei Key Laboratory of Security and Protection Information Sensing and Processing, Handan 056038, China)

  • Zishan Huang

    (School of Information and Electrical Engineering, Hebei University of Engineering, Handan 056038, China
    Hebei Key Laboratory of Security and Protection Information Sensing and Processing, Handan 056038, China)

  • Zhongcheng Wei

    (School of Information and Electrical Engineering, Hebei University of Engineering, Handan 056038, China
    Hebei Key Laboratory of Security and Protection Information Sensing and Processing, Handan 056038, China)

  • Jijun Zhao

    (School of Information and Electrical Engineering, Hebei University of Engineering, Handan 056038, China
    Hebei Key Laboratory of Security and Protection Information Sensing and Processing, Handan 056038, China)

Abstract

With the increasing popularity of the Internet of Things (IoT), the proliferation of computation-intensive and timing-dependent applications has placed a heavy load on terrestrial networks. To address the computing-resource conflicts and long response delays caused by concurrent service requests from multiple users, this paper proposes an improved edge-computing offloading scheme for timing-dependent tasks based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), which shortens offloading delay and improves resource utilization through resource prediction and collaboration among multiple agents. First, to coordinate global computing resources, a gated recurrent unit is used to predict the next computing-resource requirements of the timing-dependent tasks from historical information. Second, the predicted information, the historical offloading decisions and the current state are used as inputs, and the training process of the reinforcement learning algorithm is improved to yield a MADDPG-based task-offloading algorithm. Simulation results show that, compared with the suboptimal benchmark algorithm, the proposed algorithm reduces response latency by 6.7%, improves resource utilization by 30.6%, and requires nearly 500 fewer training rounds, which effectively improves the timeliness of the offloading strategy.
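The sketch below is an illustrative rendition, not the authors' implementation, of the two components the abstract describes: a gated recurrent unit (GRU) that predicts the next resource requirement of a timing-dependent task from its history, and a MADDPG-style deterministic actor whose input concatenates that prediction with the previous offloading decision and the current state. All class names, layer sizes, and dimensions here are assumptions chosen for exposition.

```python
# Minimal PyTorch sketch of "GRU-based resource prediction feeding a MADDPG actor".
# All names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class ResourceGRU(nn.Module):
    """Predicts the next resource requirement from a history window."""

    def __init__(self, feature_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, feature_dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, window_length, feature_dim) of past resource usage
        _, h_last = self.gru(history)           # h_last: (1, batch, hidden_dim)
        return self.head(h_last.squeeze(0))     # predicted next demand: (batch, feature_dim)


class OffloadingActor(nn.Module):
    """MADDPG-style deterministic actor for one agent (one edge device)."""

    def __init__(self, state_dim: int, action_dim: int, feature_dim: int):
        super().__init__()
        # Input = current state + previous offloading decision + GRU prediction,
        # mirroring the three inputs named in the abstract.
        in_dim = state_dim + action_dim + feature_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),   # continuous offloading decision
        )

    def forward(self, state, prev_action, predicted_resource):
        x = torch.cat([state, prev_action, predicted_resource], dim=-1)
        return self.net(x)


if __name__ == "__main__":
    batch, window, feat, state_dim, action_dim = 4, 8, 3, 10, 2
    predictor = ResourceGRU(feat)
    actor = OffloadingActor(state_dim, action_dim, feat)

    history = torch.randn(batch, window, feat)      # past resource usage
    state = torch.randn(batch, state_dim)           # current local/edge state
    prev_action = torch.randn(batch, action_dim)    # historical offloading decision

    prediction = predictor(history)
    action = actor(state, prev_action, prediction)
    print(action.shape)                             # torch.Size([4, 2])
```

In a full MADDPG setup each agent would also have a centralized critic that sees all agents' observations and actions during training; the sketch only shows how the predicted resource information could enter the actor's input, which is the part the abstract emphasizes.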

Suggested Citation

  • Yuchen Wang & Zishan Huang & Zhongcheng Wei & Jijun Zhao, 2024. "MADDPG-Based Offloading Strategy for Timing-Dependent Tasks in Edge Computing," Future Internet, MDPI, vol. 16(6), pages 1-20, May.
  • Handle: RePEc:gam:jftint:v:16:y:2024:i:6:p:181-:d:1398494

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/16/6/181/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/16/6/181/
    Download Restriction: no

    References listed on IDEAS

    1. Siyu Gao & Yuchen Wang & Nan Feng & Zhongcheng Wei & Jijun Zhao, 2023. "Deep Reinforcement Learning-Based Video Offloading and Resource Allocation in NOMA-Enabled Networks," Future Internet, MDPI, vol. 15(5), pages 1-19, May.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Qianqian Wu & Qiang Liu & Zefan Wu & Jiye Zhang, 2023. "Maximizing UAV Coverage in Maritime Wireless Networks: A Multiagent Reinforcement Learning Approach," Future Internet, MDPI, vol. 15(11), pages 1-19, November.

