Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Optimizing the Resource and Job Management System of an Academic HPC and Research Computing Facility
VARRETTE, Sébastien; KIEFFER, Emmanuel; Pinel, Frederic
2022, in 21st IEEE Intl. Symp. on Parallel and Distributed Computing (ISPDC'22)
Peer reviewed
 

Documents


Full text
2022-06-28_Camera-ready-IEEE-PDF-Express_ispdc22.pdf
Author preprint (2.05 MB)
Request access
Annexes
slides_ispdc2022.pdf
(2.88 MB)
Slides for the conference
Download

All documents in ORBilu are protected by a user licence.




Details



Keywords:
Slurm; Fairsharing; HPC
Abstract:
[en] High Performance Computing (HPC) is nowadays a strategic asset required to sustain the surging demands for massive processing and data-analytic capabilities. In practice, the effective management of such large-scale and distributed computing infrastructures is left to a Resource and Job Management System (RJMS). This essential middleware component is responsible for managing the computing resources and handling user requests to allocate resources, while providing an optimized framework for starting, executing and monitoring jobs on the allocated resources. The University of Luxembourg has been operating a large academic HPC facility for 15 years, which has relied since 2017 on the Slurm RJMS introduced on top of the flagship cluster Iris. The acquisition of a new liquid-cooled supercomputer named Aion, released in 2021, was the occasion to deeply review and optimize the seminal Slurm configuration, the resource limits defined and the underlying fairsharing algorithm. This paper presents the outcomes of this study and details the implemented RJMS policy. The impact of the decisions made on the supercomputers' workloads is also described. In particular, the performance evaluation conducted highlights that, compared to the seminal configuration, the described and implemented environment brought concrete and measurable improvements with regard to platform utilization (+12.64%), job efficiency (as measured by the average Wall-time Request Accuracy, improved by 110.81%) and management and funding (increased by 10%). The systems demonstrated sustainable and scalable HPC performance, and this effort led to a negligible penalty on the average slowdown metric (response time normalized by runtime), which increased by 0.59% for job workloads covering a complete year of operation. Overall, this new setup has been in production for 18 months on both supercomputers, and the updated model proves to bring a fairer and more satisfying experience to the end users. The proposed configurations and policies may help other HPC centres when designing or improving the RJMS sustaining their job scheduling strategy at the advent of computing capacity expansions.
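Illustrative sketch (not from the paper): the abstract refers to two job-level metrics (Wall-time Request Accuracy and the slowdown, i.e. response time normalized by runtime) and to the Slurm fairsharing algorithm. The minimal Python helpers below show one plausible reading of these quantities; the metric definitions, the example values and the classic Slurm fair-share formula 2^(-usage/(shares*damping)) are assumptions based on common usage and the public Slurm documentation, and the actual ULHPC policy and formulas are detailed in the full text.

# Hypothetical helpers for the scheduling metrics named in the abstract.
from dataclasses import dataclass

@dataclass
class Job:
    """Minimal job record (times in seconds)."""
    requested_walltime: float  # walltime requested at submission (e.g. --time)
    runtime: float             # actual execution time
    wait_time: float           # time spent pending in the queue

def walltime_request_accuracy(job: Job) -> float:
    """Ratio of used runtime to requested walltime (1.0 = perfect estimate)."""
    return job.runtime / job.requested_walltime

def slowdown(job: Job) -> float:
    """Response time (wait + run) normalized by runtime, as in the abstract."""
    return (job.wait_time + job.runtime) / job.runtime

def fairshare_factor(effective_usage: float, norm_shares: float,
                     damping: float = 1.0) -> float:
    """Classic Slurm fair-share factor: F = 2^(-usage / (shares * damping)).

    effective_usage and norm_shares are normalized to [0, 1]; a factor close
    to 1 boosts under-served accounts, a factor close to 0 penalizes heavy users.
    """
    return 2 ** (-effective_usage / (norm_shares * damping))

if __name__ == "__main__":
    job = Job(requested_walltime=7200, runtime=3600, wait_time=600)
    print(f"wall-time request accuracy: {walltime_request_accuracy(job):.2f}")
    print(f"slowdown:                   {slowdown(job):.2f}")
    print(f"fair-share factor:          {fairshare_factor(0.3, 0.25):.2f}")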
Research center:
ULHPC - University of Luxembourg: High Performance Computing
Disciplines:
Computer science
Author, co-author:
VARRETTE, Sébastien ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
KIEFFER, Emmanuel ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Pinel, Frederic
External co-authors:
yes
Document language:
English
Title:
Optimizing the Resource and Job Management System of an Academic HPC and Research Computing Facility
Publication date:
July 2022
Event name:
21st IEEE Intl. Symp. on Parallel and Distributed Computing (ISPDC'22)
Event location:
Basel, Switzerland
Event date:
July 11-13, 2022
Event scope:
International
Main work title:
21st IEEE Intl. Symp. on Parallel and Distributed Computing (ISPDC'22)
Publisher:
IEEE Computer Society, Basel, Switzerland
Peer reviewed:
Peer reviewed
Focus Area:
Computational Sciences
Additional URL:
Available on ORBilu:
since 04 July 2022
