Author
Listed:
- Hyeonseong Choi
- Byung Hyun Lee
- Se Young Chun
- Jaehwan Lee
Abstract
Modern deep neural networks often cannot be trained on a single GPU because of their large model and data sizes. Model parallelism splits a model across multiple GPUs, but making it scalable and seamless is challenging: GPUs must exchange different kinds of information, and this communication incurs overhead. Specifically, we identify two key issues that make model parallelism inefficient and inaccurate: an efficient pipelining technique is crucial to maximize GPU utilization, and normalization layers may degrade performance because mini-batch statistics are shared differently across GPUs. In this work, we address these issues by investigating efficient pipelining for model parallelism and effective normalization for model and data parallelism when training with large mini-batches on multiple GPUs, so that model accuracy is not compromised. First, we propose a novel method to search for the optimal micro-batch size for model parallelism, considering the number of GPUs and their memory capacity. For efficient pipelining, a mini-batch is typically divided into smaller batches called micro-batches, and training should be performed with the optimal micro-batch size to maximize the utilization of GPU computing resources. Our proposed micro-batch size search algorithm increased image throughput by up to 12% and the trainable mini-batch size by 25% compared with conventional model parallelism. Second, we investigate normalization in distributed deep learning training under different parallelism schemes. Our experiments with different normalization methods suggest that the performance of batch normalization in data parallelism can be improved by sharing the batch statistics among GPUs. We also confirmed that group normalization helped minimize accuracy degradation in model parallelism with pipelining and yielded consistent accuracy across diverse mini-batch sizes.
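To make the micro-batch trade-off concrete, the sketch below shows what such a search could look like. It is not the paper's algorithm: the memory model (mem_per_sample_bytes, fixed_mem_bytes), the per-step overhead term (launch_overhead_samples), and the GPipe-style bubble estimate m / (m + p - 1) are assumptions introduced for illustration only.

```python
# Hypothetical sketch of a micro-batch size search for pipelined model parallelism.
# Assumptions (not from the paper): per-GPU memory grows linearly with the micro-batch
# size, pipeline utilization follows the GPipe-style estimate m / (m + p - 1) with
# m micro-batches and p pipeline stages, and larger micro-batches amortize a fixed
# per-step kernel-launch overhead.

def search_micro_batch_size(mini_batch, num_gpus, gpu_mem_bytes,
                            mem_per_sample_bytes, fixed_mem_bytes,
                            launch_overhead_samples=4.0):
    """Return (micro_batch_size, score) with the best estimated pipeline throughput."""
    best_size, best_score = None, -1.0
    for micro in range(1, mini_batch + 1):
        if mini_batch % micro:                      # micro-batches must evenly divide the mini-batch
            continue
        # Rough per-GPU memory estimate: weights/optimizer state plus one in-flight micro-batch.
        if fixed_mem_bytes + micro * mem_per_sample_bytes > gpu_mem_bytes:
            continue                                # this micro-batch size does not fit
        m = mini_batch // micro                     # number of micro-batches per mini-batch
        bubble_util = m / (m + num_gpus - 1)        # fewer micro-batches -> larger pipeline bubble
        kernel_eff = micro / (micro + launch_overhead_samples)  # larger micro-batches amortize per-step overhead
        score = bubble_util * kernel_eff
        if score > best_score:
            best_size, best_score = micro, score
    return best_size, best_score


if __name__ == "__main__":
    size, score = search_micro_batch_size(
        mini_batch=256, num_gpus=4,
        gpu_mem_bytes=16 * 2**30,               # 16 GiB card
        mem_per_sample_bytes=48 * 2**20,        # assumed 48 MiB of activations per sample
        fixed_mem_bytes=4 * 2**30)              # assumed 4 GiB for weights and optimizer state
    print(f"micro-batch size: {size}, estimated efficiency: {score:.2f}")
```

The two factors pull in opposite directions: many small micro-batches shrink the pipeline bubble but waste per-step overhead, while few large micro-batches do the reverse, which is why an explicit search over feasible sizes is needed.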
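For the normalization findings, a minimal PyTorch-style sketch of the two options is given below (assuming PyTorch, which the abstract does not specify): synchronized batch normalization shares mini-batch statistics among data-parallel GPUs, while group normalization computes statistics per channel group and is therefore insensitive to the small micro-batches used in pipelined model parallelism. The bn_to_gn helper and the choice of 32 groups are illustrative, not the paper's code.

```python
# Minimal sketch of the two normalization choices discussed above (assumed setup,
# not the paper's implementation).
import torch.nn as nn
import torchvision

# Data parallelism: convert BatchNorm layers to SyncBatchNorm so that mean/variance
# are computed over the whole mini-batch across GPUs, not each GPU's local shard.
# (Training with SyncBatchNorm requires an initialized torch.distributed process
# group and GPU tensors, e.g. under DistributedDataParallel.)
ddp_model = nn.SyncBatchNorm.convert_sync_batchnorm(torchvision.models.resnet50())

# Pipelined model parallelism: each device sees only a small micro-batch, so replace
# BatchNorm with GroupNorm, whose statistics do not depend on the batch dimension.
def bn_to_gn(module, num_groups=32):
    """Recursively replace BatchNorm2d layers with GroupNorm (hypothetical helper)."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)
    return module

pipeline_model = bn_to_gn(torchvision.models.resnet50())
```

This mirrors the abstract's observation: sharing batch statistics among GPUs helps batch normalization under data parallelism, while a batch-size-independent normalization such as group normalization keeps accuracy stable under pipelined model parallelism.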
Suggested Citation
Hyeonseong Choi & Byung Hyun Lee & Se Young Chun & Jaehwan Lee, 2023.
"Towards accelerating model parallelism in distributed deep learning systems,"
PLOS ONE, Public Library of Science, vol. 18(11), pages 1-15, November.
Handle:
RePEc:plo:pone00:0293338
DOI: 10.1371/journal.pone.0293338