Author
Listed:
- Jing Zou
(State Grid Economic and Technological Research Institute Ltd., Beijing 221005, China)
- Peizhe Xin
(State Grid Economic and Technological Research Institute Ltd., Beijing 221005, China)
- Chang Wang
(School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China)
- Heli Zhang
(School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China)
- Lei Wei
(Information and Telecommunication Branch, State Grid Jiangsu Electric Power Ltd., Nanjing 211103, China)
- Ying Wang
(School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China)
Abstract
The booming number of artificial intelligence (AI) services in the communication network of the smart grid demands massive computational resources. To relieve the computational pressure on data centers, an edge computing first network (ECFN) can serve as an effective solution, realizing distributed model training based on data parallelism for AI services in the smart grid. Because AI services are of diverse types, the workload of an edge data center changes across time periods. Moreover, selfish edge data centers owned by different edge suppliers are reluctant to share their computing resources without a rule for fair competition. Dynamic, AI services-oriented scheduling of the computational resources of edge data centers therefore affects both the economic profit of AI service providers and the utilization of computational resources. This letter mainly discusses the partition and distribution of AI data for distributed model training and the dynamic computational resource scheduling problem among multiple edge data centers serving AI services. To this end, a mixed integer linear programming (MILP) model and a deep reinforcement learning (DRL)-based algorithm are proposed. Simulation results show that the proposed DRL-based algorithm outperforms the benchmark in terms of the profit of the AI service provider, the backlog of distributed model training tasks, running time, and multi-objective optimization.
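The abstract names the MILP model and the DRL-based scheduler without detailing them. Purely as a hedged illustration of the scheduling idea, the Python sketch below assigns data-parallel training shards of AI services to edge data centers with time-varying workloads, trading provider profit against task backlog; the data-center parameters, reward terms, and the simple tabular value update are hypothetical stand-ins, not the letter's actual MILP or DRL formulation.

# Illustrative sketch only: toy epsilon-greedy scheduler for data-parallel
# training shards across edge data centers. All names and numbers below are
# hypothetical assumptions, not values from the letter.
import random
from collections import defaultdict

# Hypothetical edge data centers: compute capacity per time slot and cost per shard.
EDGE_DCS = [
    {"capacity": 4, "cost": 1.0},
    {"capacity": 2, "cost": 0.6},
    {"capacity": 3, "cost": 0.8},
]
REVENUE_PER_SHARD = 1.5   # assumed payment received by the AI service provider
BACKLOG_PENALTY = 0.5     # assumed penalty for deferring a shard to the next slot

Q = defaultdict(float)    # tabular stand-in for a DRL value function
EPSILON, ALPHA = 0.1, 0.2

def state(loads):
    """Discretize the current per-data-center load into a hashable state."""
    return tuple(loads)

def choose_dc(loads):
    """Epsilon-greedy choice of the edge data center for the next shard."""
    if random.random() < EPSILON:
        return random.randrange(len(EDGE_DCS))
    s = state(loads)
    return max(range(len(EDGE_DCS)), key=lambda a: Q[(s, a)])

def run_slot(num_shards, loads):
    """Schedule the data-parallel shards of one time slot and return the profit."""
    profit = 0.0
    for _ in range(num_shards):
        s = state(loads)
        a = choose_dc(loads)
        if loads[a] < EDGE_DCS[a]["capacity"]:
            loads[a] += 1
            reward = REVENUE_PER_SHARD - EDGE_DCS[a]["cost"]
        else:
            reward = -BACKLOG_PENALTY  # shard is backlogged
        profit += reward
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])  # one-step value update
    return profit

if __name__ == "__main__":
    for slot in range(5):
        loads = [0] * len(EDGE_DCS)
        demand = random.randint(3, 9)  # time-varying AI workload per slot
        print(f"slot {slot}: demand={demand}, profit={run_slot(demand, loads):.2f}")

A tabular epsilon-greedy update is used here only to keep the sketch self-contained and runnable; a DRL agent of the kind the letter proposes would replace the table with a neural value or policy network and a richer state and reward design.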
Suggested Citation
Jing Zou & Peizhe Xin & Chang Wang & Heli Zhang & Lei Wei & Ying Wang, 2024.
"AI Services-Oriented Dynamic Computing Resource Scheduling Algorithm Based on Distributed Data Parallelism in Edge Computing Network of Smart Grid,"
Future Internet, MDPI, vol. 16(9), pages 1-14, August.
Handle:
RePEc:gam:jftint:v:16:y:2024:i:9:p:312-:d:1466214