The increasing complexity, diversity, and scale of high-performance computing environments, as well as the growing sophistication of parallel applications and algorithms, call for productivity-aware programming languages for high-performance computing. Among them, the Chapel programming language stands out as one of the most successful approaches based on the Partitioned Global Address Space (PGAS) programming model. Although Chapel is designed for productive parallel computing at scale, the question arises of how it competes with well-established conventional parallel programming environments. To this end, this work compares the performance of Chapel-based fractal generation on shared- and distributed-memory platforms with corresponding OpenMP and MPI+X implementations. The parallel computation of the Mandelbrot set is chosen as a test case for its high degree of parallelism and its irregular workload. Experiments are performed on a 192-core cluster of the French national testbed Grid'5000. Chapel and its default tasking layer demonstrate high performance in the shared-memory context, while Chapel competes with hybrid MPI+OpenMP in the distributed-memory setting.
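The Mandelbrot set is a natural irregularity benchmark because the escape-time iteration count varies strongly from pixel to pixel: points near the set boundary require many more iterations than points far outside it. A minimal Python sketch of the escape-time kernel (function names and parameters are illustrative, not taken from the paper):

```python
# Escape-time kernel for the Mandelbrot set: iterate z <- z*z + c
# until |z| > 2 (the point escapes) or max_iter is reached (the point
# is assumed to belong to the set). The varying iteration count is
# what makes the workload irregular.
def escape_time(c: complex, max_iter: int = 100) -> int:
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

def mandelbrot_row(y: float, xs, max_iter: int = 100):
    """Compute one image row; rows crossing the set are far costlier."""
    return [escape_time(complex(x, y), max_iter) for x in xs]

if __name__ == "__main__":
    xs = [x / 40.0 for x in range(-80, 41)]  # real axis from -2 to 1
    row = mandelbrot_row(0.0, xs)
    # c = 0 lies inside the set, so it never escapes:
    print(row[xs.index(0.0)])  # 100
```

In a parallel implementation, rows (or pixels) become independent tasks, and dynamic load balancing, e.g. OpenMP's `schedule(dynamic)` or Chapel's task-based iterators, is the usual way to absorb this irregularity.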
Disciplines :
Computer science
Author, co-author :
HELBECQUE, Guillaume; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PCOG; Université de Lille, CNRS/CRIStAL UMR 9189, Centre Inria de l’Université de Lille, France
GMYS, Jan; Université de Lille, CNRS/CRIStAL UMR 9189, Centre Inria de l’Université de Lille, France
CARNEIRO PESSOA, Tiago; University of Luxembourg > Faculty of Science, Technology and Medicine > Department of Computer Science > Team Pascal BOUVRY
MELAB, Nouredine; Université de Lille, CNRS/CRIStAL UMR 9189, Centre Inria de l’Université de Lille, France
BOUVRY, Pascal; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
External co-authors :
yes
Language :
English
Title :
A performance-oriented comparative study of the Chapel high-productivity language to conventional programming environments
Publication date :
18 April 2022
Event name :
13th International Workshop on Programming Models and Applications for Multicores and Manycores
Event place :
Seoul, South Korea
Event date :
from 02-04-2022 to 06-04-2022
Audience :
International
Main work title :
PMAM '22: Proceedings of the Thirteenth International Workshop on Programming Models and Applications for Multicores and Manycores
Publisher :
Association for Computing Machinery, New York, United States