References of "Cartiaux, Hyacinthe 50001195"
Full Text
Peer Reviewed
Aggregating and Consolidating two High Performant Network Topologies: The ULHPC Experience
Varrette, Sébastien UL; Cartiaux, Hyacinthe UL; Valette, Teddy UL et al

in ACM Practice and Experience in Advanced Research Computing (PEARC'22) (2022, July)

High Performance Computing (HPC) encompasses advanced computation over parallel processing. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores, their utilisation factor and, of course, the interconnect performance, efficiency, and scalability. In practice, this last component and the associated topology remain the most significant differentiators between HPC systems and less performant ones. The University of Luxembourg has operated since 2007 a large academic HPC facility which remains one of the reference implementations within the country and offers a cutting-edge research infrastructure to Luxembourg public research. The main high-bandwidth low-latency network of the operated facility relies on the dominant interconnect technology in the HPC market, i.e., InfiniBand (IB) over a Fat-Tree topology. It is complemented by an Ethernet-based network defined for management tasks, external access and interactions with users' applications that do not support InfiniBand natively. The recent acquisition of a new cutting-edge supercomputer, Aion, which was federated with the previous flagship cluster Iris, was the occasion to aggregate and consolidate the two types of networks. This article depicts the architecture and the solutions designed to expand and consolidate the existing networks beyond their seminal capacity limits while best preserving their bisection bandwidth. At the IB level, and despite moving away from a non-blocking configuration, the proposed approach defines a blocking topology maintaining the previous Fat-Tree height. The leaf connection capacity is more than tripled (moving from 216 to 672 end-points) while exhibiting very marginal penalties, i.e. less than 3% (resp. 0.3%) Read (resp. Write) bandwidth degradation against reference parallel I/O benchmarks, and a stable and sustainable point-to-point bandwidth efficiency among all possible pairs of nodes (measured above 95.45% for bi-directional streams). With regard to the Ethernet network, a novel 2-layer topology aiming to improve the availability, maintainability and scalability of the interconnect is described. It was deployed together with consistent network VLANs and subnets enforcing strict security policies via ACLs defined at layer 3, offering isolated and secure network environments. The implemented approaches are applicable to a broad range of HPC infrastructures and may thus help other HPC centres consolidate their own interconnect stacks when designing or expanding their network infrastructures.
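The abstract's trade-off (more end-points per leaf switch at the cost of a blocking factor) can be sketched numerically. The switch radix and blocking factors below are assumptions for illustration only, not figures taken from the article:

```python
def leaf_capacity(ports: int, blocking_factor: float) -> tuple[int, int]:
    """Split a fat-tree leaf switch's ports into down-links (to compute
    nodes) and up-links (to spine switches) for a given blocking factor,
    expressed as down-links per up-link (1.0 means non-blocking)."""
    up = round(ports / (1 + blocking_factor))
    down = ports - up
    return down, up

# Hypothetical 36-port leaf switches:
down_nb, up_nb = leaf_capacity(36, 1.0)  # non-blocking: 18 down / 18 up
down_b, up_b = leaf_capacity(36, 2.0)    # 2:1 blocking: 24 down / 12 up
```

Under these assumed numbers, accepting a 2:1 blocking factor raises each leaf's node capacity by a third without adding a switch layer, which is the kind of capacity-versus-bisection-bandwidth trade the paper reports making at facility scale.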

Full Text
Peer Reviewed
Management of an Academic HPC Research Computing Facility: The ULHPC Experience 2.0
Varrette, Sébastien UL; Cartiaux, Hyacinthe UL; Peter, Sarah UL et al

in 6th High Performance Computing and Cluster Technologies Conference (HPCCT 2022) (2022, July)

With the advent of the technological revolution and the digital transformation that made all scientific disciplines computational, High Performance Computing (HPC) has become a strategic and critical asset for leveraging new research and business in all domains requiring computing and storage performance. Since 2007, the University of Luxembourg has operated a large academic HPC facility which remains the reference implementation within the country. This paper provides a general description of the current platform implementation as well as its operational management choices, which have been adapted to the integration of a new liquid-cooled supercomputer, named Aion, released in 2021. The administration of an HPC facility to provide state-of-the-art computing systems, storage and software is indeed a complex and dynamic enterprise with the sole purpose of offering an enhanced user experience for intensive research computing and large-scale analytic workflows. Most design choices and feedback described in this work have been motivated by several years of experience in addressing in a flexible and convenient way the heterogeneous needs inherent to an academic environment striving for research excellence. The different layers and stacks used within the operated facilities are reviewed, in particular with regard to user software management and the adaptation of the Slurm Resource and Job Management System (RJMS) configuration with novel incentive mechanisms. In practice, the described and implemented environment brought concrete and measurable improvements with regard to platform utilization (+12.64%), job efficiency (average Wall-time Request Accuracy improved by 110.81%), and management and funding (increased by 10%). A thorough performance evaluation of the facility is also presented in this paper through reference benchmarks such as HPL, HPCG, Graph500, IOR and IO500. It reveals sustainable and scalable performance comparable to the most powerful supercomputers in the world, including for energy-efficiency metrics (for instance, 5.19 GFlops/W (resp. 6.14 MTEPS/W) were demonstrated for full HPL (resp. Graph500) runs across all Aion nodes).
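The abstract quotes an average Wall-time Request Accuracy improvement without reproducing the metric's definition. A common formulation (an assumption on my part, not necessarily the paper's exact one) is the ratio of wall-time a job actually consumed to the wall-time its owner requested from the scheduler:

```python
def walltime_request_accuracy(used_s: float, requested_s: float) -> float:
    """Ratio of consumed to requested wall-time for a job.
    1.0 means a perfect estimate; low values mean the scheduler
    reserved far more time than the job needed."""
    return used_s / requested_s

# A job that requested 4 hours but finished in 1 hour has 25% accuracy:
acc = walltime_request_accuracy(3600, 4 * 3600)  # 0.25
```

Over-requesting hurts backfill scheduling (the scheduler cannot slot other jobs into gaps it believes are occupied), which is why incentive mechanisms nudging this ratio toward 1.0 translate into higher platform utilization.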

Full Text
Peer Reviewed
RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility
Varrette, Sébastien UL; Kieffer, Emmanuel UL; Pinel, Frederic UL et al

in ACM Practice and Experience in Advanced Research Computing (PEARC'21) (2021, July)

High Performance Computing (HPC) is increasingly identified as a strategic asset and enabler to accelerate the research and business performed in all areas requiring intensive computing and large-scale Big Data analytics capabilities. The efficient exploitation of heterogeneous computing resources featuring different processor architectures and generations, coupled with the eventual presence of GPU accelerators, remains a challenge. The University of Luxembourg has operated since 2007 a large academic HPC facility which remains one of the reference implementations within the country and offers a cutting-edge research infrastructure to Luxembourg public research. The HPC support team invests a significant amount of time (i.e., several months of effort per year) in providing a software environment optimised for hundreds of users, but the complexity of HPC software quickly outpaced the capabilities of classical software management tools. Since 2014, our scientific software stack has been generated and deployed in an automated and consistent way through the RESIF framework, a wrapper on top of EasyBuild and Lmod [5] meant to efficiently handle user software generation. A large code refactoring was performed in 2017 to better handle different software sets and roles across multiple clusters, all piloted through a dedicated control repository. With the advent in 2020 of a new supercomputer featuring a different CPU architecture, and to mitigate the identified limitations of the existing framework, we report in this state-of-practice article RESIF 3.0, the latest iteration of our scientific software management suite, now relying on a streamlined EasyBuild. It allowed the number of custom configurations previously enforced through specific Slurm and MPI settings to be reduced by around 90%, while sustaining optimised builds coexisting for different dimensions of CPU and GPU architectures. The workflow for contributing back to the EasyBuild community was also automated, and a current work in progress aims at drastically decreasing the build time of a complete software set generation. Overall, most design choices for our wrapper have been motivated by several years of experience in addressing in a flexible and convenient way the heterogeneous needs inherent to an academic environment aiming for research excellence. As the code base is publicly available, and as we wish to transparently report the pitfalls and difficulties met as well, this tool may help other HPC centres consolidate their own software management stack.
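For readers unfamiliar with the underlying tools, the EasyBuild/Lmod combination the RESIF wrapper builds on works roughly as follows. This is a generic usage sketch, not the ULHPC deployment: the easyconfig name and install prefix are illustrative examples.

```shell
# Build a piece of software and all of its dependencies (--robot enables
# automatic dependency resolution) into a local EasyBuild prefix.
eb GCC-12.3.0.eb --robot --prefix="$HOME/.local/easybuild"

# Make the generated module tree visible to Lmod ...
module use "$HOME/.local/easybuild/modules/all"

# ... then load the software exactly as cluster users would.
module load GCC/12.3.0
```

RESIF's role, as the abstract describes it, is to drive many such builds consistently across clusters and architectures from a single control repository, rather than leaving each build to be invoked by hand.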

UL HPC Tutorial: Advanced Job scheduling with SLURM and OAR
Plugaru, Valentin UL; Varrette, Sébastien UL; Diehl, Sarah UL et al

Presentation (2018, June)

UL HPC Tutorial: Parallel computations with OpenMP/MPI
Varrette, Sébastien UL; Plugaru, Valentin UL; Diehl, Sarah UL et al

Presentation (2018, June)

UL HPC Tutorial: Bio-informatics workflows and applications
Plugaru, Valentin UL; Diehl, Sarah UL; Varrette, Sébastien UL et al

Presentation (2018, June)

UL HPC Tutorial: Statistical Computing with R
Ginolhac, Aurélien UL; Emeras, Joseph; Varrette, Sébastien UL et al

Presentation (2018, June)

UL HPC Tutorial: (Advanced) Prototyping with Python
Parisot, Clément UL; Diehl, Sarah UL; Varrette, Sébastien UL et al

Presentation (2018, June)

UL HPC Tutorial: HPC Containers with Singularity
Plugaru, Valentin UL; Varrette, Sébastien UL; Diehl, Sarah UL et al

Presentation (2018, June)

UL HPC Tutorial: HPC workflow with sequential jobs
Cartiaux, Hyacinthe UL; Varrette, Sébastien UL; Plugaru, Valentin UL et al

Presentation (2018, June)

Full Text
Large-scale research data management: Road to GDPR compliance
Bouvry, Pascal UL; Varrette, Sébastien UL; Plugaru, Valentin UL et al

Presentation (2018, April)
