[en] High Performance Computing (HPC) is increasingly identified as a strategic asset and enabler accelerating research and business in all areas requiring intensive computing and large-scale Big Data analytics capabilities. The efficient exploitation of heterogeneous computing resources featuring different processor architectures and generations, possibly coupled with GPU accelerators, remains a challenge. The University of Luxembourg has operated a large academic HPC facility since 2007, which remains one of the reference implementations within the country and offers a cutting-edge research infrastructure to Luxembourg public research. The HPC support team invests a significant amount of time (i.e., several months of effort per year) in providing a software environment optimised for hundreds of users, yet the complexity of HPC software quickly outpaced the capabilities of classical software management tools. Since 2014, our scientific software stack has been generated and deployed in an automated and consistent way through the RESIF framework, a wrapper on top of Easybuild and Lmod [5] designed to handle user software generation efficiently. A large code refactoring was performed in 2017 to better handle different software sets and roles across multiple clusters, all piloted through a dedicated control repository. With the advent in 2020 of a new supercomputer featuring a different CPU architecture, and to mitigate the identified limitations of the existing framework, this state-of-practice article reports on RESIF 3.0, the latest iteration of our scientific software management suite, now relying on a streamlined Easybuild. It reduced by around 90% the number of custom configurations previously enforced by specific Slurm and MPI settings, while sustaining optimised builds that coexist for different CPU and GPU architectures. The workflow for contributing back to the Easybuild community was also automated, and ongoing work aims at drastically decreasing the time needed to build a complete software set. Overall, most design choices for our wrapper were motivated by several years of experience in addressing, in a flexible and convenient way, the heterogeneous needs inherent to an academic environment aiming for research excellence. As the code base is publicly available, and as we also wish to transparently report the pitfalls and difficulties met, this tool may help other HPC centres consolidate their own software management stack.
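For context, RESIF drives EasyBuild through declarative build recipes known as easyconfigs, which are plain Python files. The minimal sketch below is purely illustrative (a hypothetical zlib recipe, not taken from the ULHPC control repository); it shows the kind of recipe EasyBuild consumes to build a package and to generate the matching Lmod module file.

    # Illustrative EasyBuild easyconfig (Python syntax); hypothetical example,
    # not an actual ULHPC recipe.
    easyblock = 'ConfigureMake'        # generic configure / make / make install build

    name = 'zlib'
    version = '1.2.11'

    homepage = 'https://www.zlib.net/'
    description = "Free, general-purpose lossless data-compression library."

    # The toolchain pins the compiler (and, where relevant, MPI or BLAS
    # libraries), keeping builds reproducible across clusters.
    toolchain = {'name': 'GCCcore', 'version': '10.2.0'}

    source_urls = ['https://zlib.net/fossils']
    sources = [SOURCE_TAR_GZ]          # EasyBuild template for 'zlib-1.2.11.tar.gz'

    moduleclass = 'lib'

The architecture-optimised builds mentioned in the abstract are typically obtained by varying EasyBuild's optarch setting per target partition, e.g. eb --optarch=GENERIC zlib-1.2.11-GCCcore-10.2.0.eb for a generic fallback build.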
Research centre:
ULHPC - University of Luxembourg: High Performance Computing
Disciplines:
Computer science
Author, co-author:
VARRETTE, Sébastien ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
KIEFFER, Emmanuel ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
PINEL, Frederic ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
KRISHNASAMY, Ezhilmathi ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PCOG
PETER, Sarah ; University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Bioinformatics Core
CARTIAUX, Hyacinthe ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
BESSERON, Xavier ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Engineering (DoE)
External co-authors:
No
Document language:
English
Title:
RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility
Publication date:
July 2021
Event name:
ACM Practice and Experience in Advanced Research Computing (PEARC'21)
Event date:
July 19-22, 2021
Event scope:
International
Title of the main work:
ACM Practice and Experience in Advanced Research Computing (PEARC'21)
Publisher:
Association for Computing Machinery (ACM), Virtual Event, Unknown/not specified
O. Ben-Kiki, C. Evans, and B. Ingerson. 2009. YAML Ain't Markup Language.
R. H. Castain, J. Hursey, A. Bouteiller, and D. Solt. 2018. PMIx: Process management for exascale environments. Parallel Comput. 79 (2018), 9-29.
R. Falke, R. Klein, R. Koschke, and J. Quante. 2005. The Dominance Tree in Visualizing Software Dependencies. In 3rd IEEE Intl. Workshop on Visualizing Software for Understanding and Analysis. IEEE, Budapest, Hungary, 1-6.
T. Gamblin, M. LeGendre, M. R. Collette, G. L. Lee, A. Moody, B. R. de Supinski, and S. Futral. 2015. The Spack package manager: bringing order to HPC software chaos. In SC '15: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, Austin, TX, USA, 1-12.
M. Geimer, K. Hoste, and R. McLay. 2014. Modern Scientific Software Management Using EasyBuild and Lmod. In 2014 First International Workshop on HPC User Support Tools. IEEE, New Orleans, LA, USA, 41-51. https://doi.org/10.1109/HUST.2014.8
S. Khuvis, Z-Q. You, H. Na, S. Brozell, E. Franz, T. Dockendorf, J. Gardiner, and K. Tomko. 2019. A Continuous Integration-Based Framework for Software Management. In Proc. of the Practice and Experience in Advanced Research Computing (PEARC'19). ACM, New York, NY, USA, 1-7.
D. Matthews and W. Limberg. 2018. MkDocs: documentation with Markdown. mkdocs.org.
R. McLay. 2013. Lmod: A New Environment Module System. https://lmod.rtfd.io.
S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos. 2014. Management of an Academic HPC Cluster: The UL Experience. In Proc. of the 2014 Intl. Conf. on High Performance Computing & Simulation (HPCS 2014). IEEE, Bologna, Italy, 959-967.