Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy
THANAPOL, Panissara; LAVANGNANANDA, Kittichai; LEPREVOST, Franck et al.
2023 • In Nguyen, Ngoc Thanh; Hnatkowska, Bogumiła (Eds.), Intelligent Information and Database Systems - 15th Asian Conference, ACIIDS 2023, Proceedings
Peer reviewed
 

Files


Full Text
Scheduling_Deep_Learning_Training_in_GPU_Cluster_Using_the_Model_similarity_based_Policy.pdf
Author postprint (937.32 kB)
Details



Keywords :
Deep learning; Distributed training; GPU cluster; Scheduling; Scheduling policy; Similarity measurement; Learning models; Learning technology; Neural networks; Performance; Theoretical Computer Science; Computer Science (all)
Abstract :
[en] Training large neural networks on huge amounts of data with multiple Graphics Processing Units (GPUs) has become widespread with the emergence of Deep Learning (DL) technology. Such training is usually performed in datacenters featuring multiple GPU clusters, which are shared amongst users. However, different GPU architectures co-exist on the market and differ in training performance. To maximise the utilisation of a GPU cluster, the scheduler plays an important role in managing the resources by dispatching jobs to the GPUs. An efficient scheduling strategy should take into account that the training performance of each GPU architecture varies for different DL models. In this work, an original model-similarity-based scheduling policy is introduced that matches DL models with suitable GPU architectures. The results show that using the model-similarity-based scheduling policy for distributed training of a DL model with a large batch size across multiple GPUs can reduce the makespan.
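As a rough illustration of the abstract's core idea only (not the algorithm described in the paper), the Python sketch below dispatches an incoming training job to the GPU architecture on which its most similar reference model was profiled fastest. The model descriptors, the cosine similarity measure, and the throughput figures are all hypothetical assumptions made for this example.

import math

# Hypothetical reference-model descriptors: (parameters in millions, depth, batch size).
REFERENCE_MODELS = {
    "resnet50": (25.6, 50, 256),
    "vgg16": (138.0, 16, 128),
    "lstm": (66.0, 2, 64),
}

# Hypothetical profiled training throughput (samples/s) per GPU architecture.
THROUGHPUT = {
    "resnet50": {"V100": 1200, "RTX2080Ti": 900},
    "vgg16": {"V100": 600, "RTX2080Ti": 700},
    "lstm": {"V100": 800, "RTX2080Ti": 500},
}

def cosine_similarity(a, b):
    # Cosine similarity between two descriptor vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def dispatch(job_descriptor):
    # Pick the reference model most similar to the job, then the GPU
    # architecture on which that reference model trains fastest.
    best_model = max(REFERENCE_MODELS,
                     key=lambda m: cosine_similarity(job_descriptor, REFERENCE_MODELS[m]))
    best_gpu = max(THROUGHPUT[best_model], key=THROUGHPUT[best_model].get)
    return best_model, best_gpu

if __name__ == "__main__":
    # A job resembling a large-batch CNN matches resnet50 and is dispatched to V100.
    print(dispatch((30.0, 48, 256)))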
Research center :
ULHPC - University of Luxembourg: High Performance Computing
Disciplines :
Computer science
Author, co-author :
THANAPOL, Panissara ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
LAVANGNANANDA, Kittichai ;  University of Luxembourg ; School of Information Technology, King Mongkut’s University of Technology Thonburi, Bangkok, Thailand
LEPREVOST, Franck ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
SCHLEICH, Julien  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
BOUVRY, Pascal ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
External co-authors :
yes
Language :
English
Title :
Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy
Publication date :
2023
Event name :
ACIIDS 2023: Intelligent Information and Database Systems
Event place :
Phuket, Thailand
Event date :
24-07-2023 to 26-07-2023
Audience :
International
Main work title :
Intelligent Information and Database Systems - 15th Asian Conference, ACIIDS 2023, Proceedings
Editor :
Nguyen, Ngoc Thanh
Hnatkowska, Bogumiła
Publisher :
Springer Science and Business Media Deutschland GmbH
ISBN/EAN :
9789819958368
Peer reviewed :
Peer reviewed
Focus Area :
Computational Sciences
Funding text :
The authors are grateful to Grid’5000, which provided computing resources throughout this research.
Available on ORBilu :
since 21 November 2023
