2023 • In Nguyen, Ngoc Thanh; Hnatkowska, Bogumiła (Eds.) Intelligent Information and Database Systems - 15th Asian Conference, ACIIDS 2023, Proceedings
[en] Training large neural networks with huge amounts of data using multiple Graphics Processing Units (GPUs) became widespread with the emergence of Deep Learning (DL) technology. Such training is usually carried out in datacenters featuring multiple GPU clusters, which are shared amongst users. However, different GPU architectures co-exist on the market and differ in training performance. To maximise the utilisation of a GPU cluster, the scheduler plays an important role in managing the resources by dispatching jobs to the GPUs. An efficient scheduling strategy should take into account that the training performance of each GPU architecture varies across DL models. In this work, an original model-similarity-based scheduling policy is introduced that matches DL models with the GPU architectures best suited to them. The results show that applying the model-similarity-based scheduling policy to the distributed training of DL models with large batch sizes across multiple GPUs can reduce the makespan.
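The paper's exact policy is not reproduced in this record; the sketch below only illustrates the general idea described in the abstract, under stated assumptions: a job is matched to the profiled reference model it most resembles and dispatched to the GPU architecture on which that reference model trains fastest. The profiling table, reference-model features, GPU types, and all function names (similarity, dispatch) are hypothetical and chosen purely for illustration.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Job:
    """An incoming DL training job, described by a few coarse model features."""
    name: str
    params_m: float   # number of trainable parameters, in millions
    depth: int        # number of layers
    batch_size: int


# Hypothetical profiling table: training throughput (images/s) of reference
# models on each GPU architecture available in the cluster.
PROFILED_THROUGHPUT = {
    "resnet50":    {"V100": 380.0, "T4": 140.0},
    "mobilenetv2": {"V100": 900.0, "T4": 520.0},
}

# Hypothetical feature vectors (params in millions, depth) of the reference models.
REFERENCE_FEATURES = {
    "resnet50":    (25.6, 50),
    "mobilenetv2": (3.4, 53),
}


def similarity(job: Job, ref: tuple[float, float]) -> float:
    """Crude similarity score: 1 minus the mean relative gap in params and depth."""
    gap_params = abs(job.params_m - ref[0]) / max(job.params_m, ref[0])
    gap_depth = abs(job.depth - ref[1]) / max(job.depth, ref[1])
    return 1.0 - 0.5 * (gap_params + gap_depth)


def dispatch(job: Job, free_gpus: dict[str, int]) -> str | None:
    """Send the job to a free GPU of the architecture on which its most similar
    reference model trains fastest; return None if no suitable GPU is free."""
    best_ref = max(REFERENCE_FEATURES, key=lambda r: similarity(job, REFERENCE_FEATURES[r]))
    ranked = sorted(PROFILED_THROUGHPUT[best_ref].items(), key=lambda kv: kv[1], reverse=True)
    for gpu_type, _throughput in ranked:
        if free_gpus.get(gpu_type, 0) > 0:
            free_gpus[gpu_type] -= 1
            return gpu_type
    return None  # no suitable GPU is free; the job stays in the queue


if __name__ == "__main__":
    cluster = {"V100": 1, "T4": 2}
    job = Job(name="custom-cnn", params_m=23.0, depth=48, batch_size=256)
    print(dispatch(job, cluster))  # a ResNet-like job is routed to the V100
```

A real scheduler would additionally account for queue order, fairness, and multi-GPU placement; this sketch only shows how per-architecture profiles of similar models can drive the dispatch decision.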
Research center :
ULHPC - University of Luxembourg: High Performance Computing
Disciplines :
Computer science
Author, co-author :
THANAPOL, Panissara ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
LAVANGNANANDA, Kittichai ; University of Luxembourg ; School of Information Technology, King Mongkut’s University of Technology Thonburi, Bangkok, Thailand
LEPREVOST, Franck ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
SCHLEICH, Julien ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
BOUVRY, Pascal ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
External co-authors :
yes
Language :
English
Title :
Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy
Publication date :
2023
Event name :
ACIIDS 2023: Intelligent Information and Database Systems
Event place :
Phuket, Thailand
Event date :
24-07-2023 => 26-07-2023
Audience :
International
Main work title :
Intelligent Information and Database Systems - 15th Asian Conference, ACIIDS 2023, Proceedings
Editor :
Nguyen, Ngoc Thanh
Hnatkowska, Bogumiła
Publisher :
Springer Science and Business Media Deutschland GmbH