
Energy Aware Virtual Machine Scheduling in Data Centers

Author

Listed:
  • Yeliang Qiu

    (Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China
    School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China)

  • Congfeng Jiang

    (Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China
    School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China)

  • Yumei Wang

    (Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China
    School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China)

  • Dongyang Ou

    (Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China
    School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China)

  • Youhuizi Li

    (Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China
    School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China)

  • Jian Wan

    (Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China
    School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China)

Abstract

Power consumption is a primary concern in modern servers and data centers. Due to variations in workload type and intensity, different servers may have different energy efficiency (EE) and energy proportionality (EP) even with the same hardware configuration (i.e., central processing unit (CPU) generation and memory installation). For example, CPU frequency scaling and memory module voltage scaling can significantly affect a server’s energy efficiency. In conventional virtualized data centers, the virtual machine (VM) scheduler packs VMs onto servers until they saturate, without considering differences in their EE and EP. In this paper we propose EASE, the Energy efficiency and proportionality Aware VM SchEduling framework, which contains data collection and scheduling algorithms. In the EASE framework, each server’s EE and EP characteristics are first identified by executing customized computing-intensive, memory-intensive, and hybrid benchmarks. Servers are then labelled and categorized by their affinity for different incoming requests according to their EP and EE characteristics. For each VM, EASE performs a workload characterization procedure, tracing and monitoring its resource usage, including CPU, memory, disk, and network, to determine whether it is computing intensive, memory intensive, or a hybrid workload. Finally, EASE schedules VMs onto servers by matching the VM’s workload type with the server’s EP and EE preference. The rationale of EASE is to schedule VMs so that servers keep working around their peak energy efficiency point, i.e., their near-optimal working range. When the workload fluctuates, EASE re-schedules or migrates VMs to other servers to keep all servers running as close to their optimal working range as possible. Experimental results on real clusters show that EASE can reduce server power consumption by 37.07%–49.98% in both homogeneous and heterogeneous clusters, while the average completion time of computing-intensive VMs increases by only 0.31%–8.49%. On heterogeneous nodes, the power consumption of computing-intensive VMs can be reduced by 44.22% and job completion time by 53.80%.
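
To make the matching step concrete, here is a minimal Python sketch; it is an illustration under assumed names (Server, VM, schedule, preferred_type, peak_efficiency_util), not the authors' implementation. It only shows the core rule described above: place a VM on a server whose EE/EP label fits the VM's workload type while keeping that server's utilization near its peak-efficiency point.

    # Hypothetical sketch (not the authors' code): match a VM's workload type to a
    # server labelled by its energy efficiency (EE) / energy proportionality (EP)
    # profile, and keep the chosen server near its peak-efficiency utilization.
    from __future__ import annotations

    from dataclasses import dataclass
    from enum import Enum


    class WorkloadType(Enum):
        COMPUTE = "compute"
        MEMORY = "memory"
        HYBRID = "hybrid"


    @dataclass
    class Server:
        name: str
        preferred_type: WorkloadType   # label from offline EE/EP benchmarking
        peak_efficiency_util: float    # utilization at the server's peak-EE point
        utilization: float = 0.0       # current fraction of capacity in use


    @dataclass
    class VM:
        name: str
        workload_type: WorkloadType    # from tracing CPU/memory/disk/network usage
        demand: float                  # fraction of one server's capacity


    def schedule(vm: VM, servers: list[Server]) -> Server | None:
        """Place the VM on a type-matching server whose resulting utilization
        stays closest to that server's peak-efficiency point."""
        fits = [s for s in servers if s.utilization + vm.demand <= 1.0]
        matching = [s for s in fits
                    if s.preferred_type in (vm.workload_type, WorkloadType.HYBRID)]
        candidates = matching or fits
        if not candidates:
            return None                # no capacity left: defer or migrate instead
        best = min(candidates,
                   key=lambda s: abs(s.utilization + vm.demand - s.peak_efficiency_util))
        best.utilization += vm.demand
        return best


    # Usage: a compute-labelled and a memory-labelled server, one compute-intensive VM.
    servers = [Server("s1", WorkloadType.COMPUTE, peak_efficiency_util=0.7),
               Server("s2", WorkloadType.MEMORY, peak_efficiency_util=0.6)]
    print(schedule(VM("vm1", WorkloadType.COMPUTE, demand=0.3), servers).name)  # prints "s1"

The re-scheduling and migration step of the framework would, under the same assumptions, re-run this matching when a server drifts away from its peak-efficiency utilization.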

Suggested Citation

  • Yeliang Qiu & Congfeng Jiang & Yumei Wang & Dongyang Ou & Youhuizi Li & Jian Wan, 2019. "Energy Aware Virtual Machine Scheduling in Data Centers," Energies, MDPI, vol. 12(4), pages 1-21, February.
  • Handle: RePEc:gam:jeners:v:12:y:2019:i:4:p:646-:d:206683

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/12/4/646/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/12/4/646/
    Download Restriction: no

    References listed on IDEAS

    1. Luca Chiaraviglio & Antonio Cianfrani & Marco Listanti & William Liu & Marco Polverini, 2016. "Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation," Energies, MDPI, vol. 9(6), pages 1-17, June.
    2. Yan Bai & Lijun Gu & Xiao Qi, 2018. "Comparative Study of Energy Performance between Chip and Inlet Temperature-Aware Workload Allocation in Air-Cooled Data Center," Energies, MDPI, vol. 11(3), pages 1-23, March.
    3. Emelie Wibron & Anna-Lena Ljung & T. Staffan Lundström, 2018. "Computational Fluid Dynamics Modeling and Validating Experiments of Airflow in a Data Center," Energies, MDPI, vol. 11(3), pages 1-15, March.
    4. Xiao-Fang Liu & Zhi-Hui Zhan & Jun Zhang, 2017. "An Energy Aware Unified Ant Colony System for Dynamic Virtual Machine Placement in Cloud Computing," Energies, MDPI, vol. 10(5), pages 1-15, May.
    5. Saima Zafar & Shafique Ahmad Chaudhry & Sara Kiran, 2016. "Adaptive TrimTree: Green Data Center Networks through Resource Consolidation, Selective Connectedness and Energy Proportional Computing," Energies, MDPI, vol. 9(10), pages 1-17, October.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Zhiling Guo & Jin Li & Ram Ramesh, 2023. "Green Data Analytics of Supercomputing from Massive Sensor Networks: Does Workload Distribution Matter?," Information Systems Research, INFORMS, vol. 34(4), pages 1664-1685, December.
    2. Kaiqiang Zhang & Dongyang Ou & Congfeng Jiang & Yeliang Qiu & Longchuan Yan, 2021. "Power and Performance Evaluation of Memory-Intensive Applications," Energies, MDPI, vol. 14(14), pages 1-20, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Pio Alessandro Lombardi & Kranthi Ranadheer Moreddy & André Naumann & Przemyslaw Komarnicki & Carmine Rodio & Sergio Bruno, 2019. "Data Centers as Active Multi-Energy Systems for Power Grid Decarbonization: A Technical and Economic Analysis," Energies, MDPI, vol. 12(21), pages 1-14, November.
    2. Jin, Chaoqiang & Bai, Xuelian & Yang, Chao & Mao, Wangxin & Xu, Xin, 2020. "A review of power consumption models of servers in data centers," Applied Energy, Elsevier, vol. 265(C).
    3. Emelie Wibron & Anna-Lena Ljung & T. Staffan Lundström, 2019. "Comparing Performance Metrics of Partial Aisle Containments in Hard Floor and Raised Floor Data Centers Using CFD," Energies, MDPI, vol. 12(8), pages 1-17, April.
    4. Chu, Wen-Xiao & Wang, Chi-Chuan, 2019. "A review on airflow management in data centers," Applied Energy, Elsevier, vol. 240(C), pages 84-119.
    5. Isazadeh, Amin & Ziviani, Davide & Claridge, David E., 2023. "Global trends, performance metrics, and energy reduction measures in datacom facilities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 174(C).
    6. Gupta, Rohit & Asgari, Sahar & Moazamigoodarzi, Hosein & Down, Douglas G. & Puri, Ishwar K., 2021. "Energy, exergy and computing efficiency based data center workload and cooling management," Applied Energy, Elsevier, vol. 299(C).
    7. Maria Avgerinou & Paolo Bertoldi & Luca Castellazzi, 2017. "Trends in Data Centre Energy Consumption under the European Code of Conduct for Data Centre Energy Efficiency," Energies, MDPI, vol. 10(10), pages 1-18, September.
    8. Cho, Jinkyun & Kim, Youngmo, 2021. "Development of modular air containment system: Thermal performance optimization of row-based cooling for high-density data centers," Energy, Elsevier, vol. 231(C).
    9. Rickard Brännvall & Jonas Gustafsson & Fredrik Sandin, 2023. "Modular and Transferable Machine Learning for Heat Management and Reuse in Edge Data Centers," Energies, MDPI, vol. 16(5), pages 1-24, February.
    10. Jinkyun Cho & Jesang Woo & Beungyong Park & Taesub Lim, 2020. "A Comparative CFD Study of Two Air Distribution Systems with Hot Aisle Containment in High-Density Data Centers," Energies, MDPI, vol. 13(22), pages 1-19, November.
    11. S. H. Alsamhi & Ou Ma & Mohd. Samar Ansari & Qingliang Meng, 2019. "Greening internet of things for greener and smarter cities: a survey and future prospects," Telecommunication Systems: Modelling, Analysis, Design and Management, Springer, vol. 72(4), pages 609-632, December.
    12. Xiao-Fang Liu & Zhi-Hui Zhan & Jun Zhang, 2017. "An Energy Aware Unified Ant Colony System for Dynamic Virtual Machine Placement in Cloud Computing," Energies, MDPI, vol. 10(5), pages 1-15, May.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:12:y:2019:i:4:p:646-:d:206683. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.