The increase in complexity, diversity, and scale of high-performance computing environments, together with the increasing sophistication of parallel applications and algorithms, calls for productivity-aware programming languages for high-performance computing. Among them, the Chapel programming language stands out as one of the more successful approaches based on the Partitioned Global Address Space (PGAS) programming model. Although Chapel is designed for productive parallel computing at scale, the question arises of how it competes with well-established conventional parallel programming environments. To this end, this work compares the performance of Chapel-based fractal generation on shared- and distributed-memory platforms with corresponding OpenMP and MPI+X implementations. The parallel computation of the Mandelbrot set is chosen as a test case for its high degree of parallelism and its irregular workload. Experiments are performed on a 192-core cluster of the French national testbed Grid'5000. Chapel and its default tasking layer demonstrate high performance in the shared-memory context, while Chapel competes with hybrid MPI+OpenMP in the distributed-memory setting.
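To illustrate why the Mandelbrot set is a natural test case, the sketch below shows a minimal data-parallel escape-time kernel in Chapel. This is an illustrative sketch, not the paper's implementation; the names (`mandel`, `n`, `maxIter`) and the square region of the complex plane are assumptions made here for clarity. The per-pixel iteration count varies widely across the image, which is the source of the irregular workload mentioned above.

```chapel
config const n = 1024;       // image resolution (n x n); hypothetical default
config const maxIter = 1000; // escape-time iteration cap; hypothetical default

// Escape-time test for one point c of the complex plane: iterate
// z = z^2 + c and return the iteration at which |z| exceeds 2,
// or maxIter if the point appears to belong to the set.
proc mandel(c: complex): int {
  var z: complex = 0.0 + 0.0i;
  for i in 1..maxIter {
    z = z*z + c;
    if abs(z) > 2.0 then return i;
  }
  return maxIter;
}

var image: [0..#n, 0..#n] int;

// A data-parallel forall distributes pixels over Chapel tasks.
// Because neighboring pixels can require very different iteration
// counts, load balancing (e.g., via Chapel's DynamicIters module)
// matters for this workload.
forall (i, j) in image.domain {
  // Map pixel (i, j) to a point in [-2, 2] x [-2, 2] (assumed window).
  const c = ((4.0*j)/n - 2.0) + ((4.0*i)/n - 2.0)*1.0i;
  image[i, j] = mandel(c);
}
```

On a single locale this `forall` runs over the tasking layer studied in the paper; a distributed-memory variant would additionally map `image` over a distributed domain so that iterations execute on the locale owning each pixel.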