High Performance Parallel Coupling of OpenFOAM+XDEM
Besseron, Xavier; Pozzetti, Gabriele; Rousset, Alban; Mainassara Chekaraou, Abdoul Wahid; Peters, Bernhard

Presentation (2019, June 21)

Short Introduction to the Roofline Model
Besseron, Xavier

Presentation (2019, June 20)

Co-located Partitioning Strategy and Dual-grid Multiscale Approach for Parallel Coupling of CFD-DEM Simulations
Besseron, Xavier; Pozzetti, Gabriele; Rousset, Alban; Mainassara Chekaraou, Abdoul Wahid; Peters, Bernhard

Presentation (2019, June 05)

A parallel dual-grid multiscale approach to CFD-DEM couplings
Pozzetti, Gabriele; Jasak, Hrvoje; Besseron, Xavier; Rousset, Alban; Peters, Bernhard

in Journal of Computational Physics (2019), 378

In this work, a new parallel dual-grid multiscale approach for CFD-DEM couplings is investigated. Dual-grid multiscale CFD-DEM couplings have recently been developed and successfully adopted in different applications; still, an efficient parallelization of such a numerical method remains an open issue. Despite its ability to provide grid-convergent solutions and more accurate results than standard CFD-DEM couplings, this young numerical method requires good parallel performance in order to be applied to large-scale problems and, therefore, to extend its range of application. The parallelization strategy proposed here aims to take advantage of the enhanced complexity of a dual-grid coupling to gain more flexibility in the domain partitioning while keeping a low inter-process communication cost. In particular, it avoids inter-process communication between the CFD and DEM software and still allows adopting complex partitioning strategies thanks to an optimized grid-based communication. It is shown how the parallelized multiscale coupling retains all its natural advantages over a mono-scale coupling and can also achieve better parallel performance. Three benchmark cases are presented to assess the accuracy and performance of the strategy. It is shown how the proposed method maintains good parallel performance when operated on over 1000 processes.
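
The dual-grid idea described above can be summarised in a minimal, self-contained C++ sketch (toy 1D grids and hypothetical names, not the actual OpenFOAM or XDEM interfaces): particle data are deposited on a fine exchange grid whose cells live on the same process as their particles, the exchange fields are restricted to the coarser CFD grid where the fluid is solved, and the fluid solution is prolonged back, so no CFD-DEM inter-process communication is needed.

```cpp
// Minimal sketch of one dual-grid CFD-DEM coupling step (hypothetical names,
// not the OpenFOAM+XDEM API). The fine grid carries the particle<->fluid
// exchange fields; the coarse grid is where the fluid equations are solved.
// Because each fine cell is owned by the process that owns its particles,
// the exchange below is purely process-local.
#include <cstdio>
#include <vector>

struct Particle { double x, volume; };

int main() {
    const int nFine = 8, ratio = 2;          // 8 fine cells, 4 coarse cells
    const int nCoarse = nFine / ratio;
    const double dxFine = 1.0 / nFine;

    std::vector<Particle> particles = {{0.10, 0.02}, {0.15, 0.02}, {0.60, 0.03}};

    // 1) Deposit particle volume on the fine grid (local operation).
    std::vector<double> solidFrac(nFine, 0.0);
    for (const Particle& p : particles) {
        int c = static_cast<int>(p.x / dxFine);
        solidFrac[c] += p.volume / dxFine;   // 1D "volume" = length
    }

    // 2) Restrict the exchange field to the coarse CFD grid.
    std::vector<double> solidFracCoarse(nCoarse, 0.0);
    for (int c = 0; c < nFine; ++c)
        solidFracCoarse[c / ratio] += solidFrac[c] / ratio;

    // 3) Fluid solve on the coarse grid (stub: velocity scales with porosity).
    std::vector<double> uCoarse(nCoarse);
    for (int C = 0; C < nCoarse; ++C)
        uCoarse[C] = 1.0 * (1.0 - solidFracCoarse[C]);

    // 4) Prolong the fluid velocity back to the fine grid and to the particles.
    for (const Particle& p : particles) {
        int c = static_cast<int>(p.x / dxFine);
        std::printf("particle at x=%.2f sees fluid velocity %.3f\n",
                    p.x, uCoarse[c / ratio]);
    }
    return 0;
}
```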

The XDEM Multi-physics and Multi-scale Simulation Technology: Review on DEM-CFD Coupling, Methodology and Engineering Applications
Peters, Bernhard; Baniasadi, Maryam; Baniasadi, Mehdi; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio; Mohseni, Seyedmohammad; Pozzetti, Gabriele

in Particuology (2019), 44

The XDEM multi-physics and multi-scale simulation platform is rooted in the Extended Discrete Element Method (XDEM) and is being developed at the Institute of Computational Engineering at the University of Luxembourg. The platform is an advanced multi-physics simulation technology that combines flexibility and versatility to establish the next generation of multi-physics and multi-scale simulation tools. For this purpose the simulation framework relies on coupling various predictive tools based on both Eulerian and Lagrangian approaches. Eulerian approaches represent the wide field of continuum models, while the Lagrangian approach is perfectly suited to characterise discrete phases. Thus, continuum models include classical simulation tools such as Computational Fluid Dynamics (CFD) or Finite Element Analysis (FEA), while an extended configuration of the classical Discrete Element Method (DEM) addresses the discrete, e.g. particulate, phase. Apart from predicting the trajectories of individual particles, XDEM extends the application to estimating the thermodynamic state of each particle by advanced and optimised algorithms. The thermodynamic state may include temperature and species distributions due to chemical reaction and external heat sources. Hence, coupling these extended features with either CFD or FEA opens up a wide range of applications as diverse as the pharmaceutical industry, e.g. drug production, the agriculture, food and processing industry, mining, construction and agricultural machinery, metals manufacturing, energy production and systems biology.

Security, reliability and regulation compliance in Ultrascale Computing System
Bouvry, Pascal; Varrette, Sébastien; Wasim, Muhammad Umer; Ibrahim, Abdallah Ali Zainelabden Abdallah; Besseron, Xavier; Trinh, T. A.

in Carretero, J.; Jeannot, E. (Eds.) Ultrascale Computing Systems (2018)

Ultrascale Computing Systems (UCSs) are envisioned as large-scale complex systems joining parallel and distributed computing systems that will be two to three orders of magnitude larger than today's systems (considering the number of Central Processing Unit (CPU) cores). It is very challenging to find sustainable solutions for UCSs due to their scale and the wide range of possible applications and involved technologies. For example, we need to deal with heterogeneity and cross-fertilization among HPC, large-scale distributed systems, and big data management. One of the challenges regarding sustainable UCSs is resilience. Another one, which has attracted less interest in the literature but becomes more and more crucial with the expected convergence with the Cloud computing paradigm, is the notion of regulation in such systems to assess the Quality of Service (QoS) and Service Level Agreement (SLA) proposed for the use of these platforms. This chapter covers both aspects through the reproduction of two articles: [1] and [2].

Verlet buffer for broad phase interaction detection in Discrete Element Method
Mainassara Chekaraou, Abdoul Wahid; Rousset, Alban; Besseron, Xavier; Peters, Bernhard

Poster (2018, September 24)

The Extended Discrete Element Method (XDEM) is a novel and innovative numerical simulation technique that extends the dynamics of granular materials or particles as described through the classical discrete element method (DEM) by additional properties such as the thermodynamic state and stress/strain for each particle. Such DEM simulations, used by industry to set up experimental processes, are complex and heavy in computation time. Therefore, simulations have to be precise, efficient and fast in order to be able to process hundreds of millions of particles. To tackle this issue, such DEM simulations are usually parallelized with MPI. One of the most expensive parts of a DEM simulation is the collision detection of particles. It is classically divided into two steps: the broad phase and the narrow phase. The broad phase uses simplified bounding volumes to perform an approximate but fast collision detection. It returns a list of particle pairs that could interact. The narrow phase is applied to the result of the broad phase and returns the exact list of colliding particles. The goal of this research is to apply a Verlet buffer method to (X)DEM simulations regardless of which broad phase algorithm is used. We rely on the fact that such DEM simulations are temporally coherent: the neighborhood only changes slightly from one time-step to the next. We use the Verlet buffer method to extend the list of pairs returned by the broad phase by stretching the particles' bounding volumes with an extension range. This allows re-using the result of the broad phase for several time-steps before an update is required again, and thereby reduces the number of times the broad phase is executed. We have implemented a condition based on particle displacements to ensure the validity of the broad phase: a new one is executed to update the list of colliding particles only when necessary. This guarantees identical results, because the approximations introduced in the broad phase by our approach are corrected in the narrow phase, which is executed at every time-step anyway. We perform an extensive study to evaluate the influence of the Verlet extension range on the performance of the execution in terms of computation time and memory consumption. We consider different test-cases, partitioners (ORB, Zoltan, METIS, SCOTCH, ...), broad phase algorithms (Link cell, Sweep and prune, ...) and grid configurations (fine, coarse), in sequential and parallel executions (up to 280 cores). While a larger Verlet buffer increases the cost of the broad phase and narrow phase, it also allows skipping a significant number of broad phase executions (> 99%). As a consequence, our first results show that this approach can speed up the total execution time by up to a factor of 5 for sequential executions, and up to a factor of 3 for parallel executions on 280 cores, while maintaining a reasonable memory consumption.
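
The rebuild criterion described above can be illustrated with a small C++ sketch (illustrative only, not the XDEM implementation, and with a toy motion model): the broad phase is re-run only when some particle has moved more than half the extension range since the last broad-phase execution.

```cpp
// Minimal sketch of the Verlet-buffer rebuild criterion. Bounding volumes are
// inflated by `skin` when the broad phase runs; the broad-phase result stays
// valid as long as no particle has moved more than skin/2 since that run.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

int main() {
    const double skin = 0.1;                   // Verlet extension range
    std::vector<Vec3> pos = {{0,0,0}, {1,0,0}, {0,1,0}};
    std::vector<Vec3> refPos = pos;            // positions at last broad phase
    int broadPhaseRuns = 0;

    for (int step = 0; step < 1000; ++step) {
        pos[0].x += 1e-4;                      // toy motion: slow drift

        // Rebuild condition: has any particle moved more than skin/2 since
        // the reference positions were recorded?
        bool rebuild = (step == 0);
        for (size_t i = 0; i < pos.size() && !rebuild; ++i)
            if (dist(pos[i], refPos[i]) > 0.5 * skin) rebuild = true;

        if (rebuild) {
            // ... run the (expensive) broad phase with volumes inflated by skin ...
            refPos = pos;
            ++broadPhaseRuns;
        }
        // ... narrow phase runs every step on the cached candidate pairs ...
    }
    std::printf("broad phase executed %d times over 1000 steps\n", broadPhaseRuns);
    return 0;
}
```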

A co-located partitions strategy for parallel CFD-DEM couplings
Pozzetti, Gabriele; Besseron, Xavier; Rousset, Alban; Peters, Bernhard

in Advanced Powder Technology (2018)

In this work, a new partition-collocation strategy for the parallel execution of CFD-DEM couplings is investigated. Good parallel performance is a key issue for an Eulerian-Lagrangian software that aims to be applied to industrially significant problems, as the computational cost of these couplings is one of their main drawbacks. The approach presented here consists in co-locating the overlapping parts of the simulation domain of each software on the same MPI process, in order to reduce the cost of the data exchanges. It is shown how this strategy allows reducing memory consumption and inter-process communication between CFD and DEM to a minimum, and therefore overcoming an important parallelization bottleneck identified in the literature. Three benchmarks are proposed to assess the consistency and scalability of this approach. A coupled execution on 280 cores shows that less than 0.1% of the time is used to perform inter-physics data exchange.
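
A conceptual C++ sketch of the co-location idea follows (hypothetical names and a toy 1D decomposition, not the actual coupling code): each DEM cell is simply assigned to the MPI rank that already owns the CFD cell covering the same region, so the inter-physics exchange on every rank becomes a local memory copy.

```cpp
// Conceptual sketch of co-located partitioning: DEM cells inherit the rank of
// the CFD cell covering the same region, so CFD<->DEM data exchange needs no
// MPI messages.
#include <cstdio>
#include <map>
#include <vector>

struct Cell { int id; double cx, cy; };   // cell id and centre coordinates

// Rank owning a CFD cell, e.g. as decided by the CFD domain decomposition.
int cfdOwner(const Cell& c, int nRanks) {
    return static_cast<int>(c.cx * nRanks) % nRanks;   // toy 1D slab split
}

int main() {
    const int nRanks = 4;
    std::vector<Cell> cfdCells, demCells;
    for (int i = 0; i < 8; ++i) {
        cfdCells.push_back({i, (i + 0.5) / 8.0, 0.5});
        demCells.push_back({i, (i + 0.5) / 8.0, 0.5});  // same region, finer physics
    }

    // Co-location: a DEM cell is placed on the rank of the overlapping CFD cell.
    std::map<int, int> demRank;
    for (const Cell& d : demCells)
        demRank[d.id] = cfdOwner(d, nRanks);

    for (const Cell& d : demCells)
        std::printf("DEM cell %d -> rank %d (same as CFD cell %d)\n",
                    d.id, demRank[d.id], d.id);
    return 0;
}
```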

Hybrid MPI+OpenMP Implementation of eXtended Discrete Element Method
Mainassara Chekaraou, Abdoul Wahid; Rousset, Alban; Besseron, Xavier; Varrette, Sébastien; Peters, Bernhard

in Proc. of the 9th Workshop on Applications for Multi-Core Architectures (WAMCA'18), part of 30th Intl. Symp. on Computer Architecture and High Performance Computing (SBAC-PAD 2018) (2018, September)

The Extended Discrete Element Method (XDEM) is a novel and innovative numerical simulation technique that extends the classical Discrete Element Method (DEM) (which simulates the motion of granular material) by additional properties such as the chemical composition, thermodynamic state and stress/strain for each particle. It has been applied successfully to numerous industries involving the processing of granular materials such as sand, rock, wood or coke [16], [17]. In this context, computational simulation with (X)DEM has become a more and more essential tool for researchers and scientific engineers to set up and explore their experimental processes. However, increasing the size or the accuracy of a model requires the use of High Performance Computing (HPC) platforms over a parallelized implementation to accommodate the growing needs in terms of memory and computation time. In practice, such a parallelization is traditionally obtained using either MPI (distributed memory computing), OpenMP (shared memory computing) or hybrid approaches combining both of them. In this paper, we present the results of our effort to implement an OpenMP version of XDEM allowing hybrid MPI+OpenMP simulations (XDEM being already parallelized with MPI). Far from the basic OpenMP paradigm and recommendations (which simply amount to decorating the main computation loops with a set of OpenMP pragmas), the OpenMP parallelization of XDEM required a fundamental code re-factoring and careful tuning in order to reach good performance. There are two main reasons for those difficulties. Firstly, XDEM is a legacy code developed for more than 10 years, initially focused on accuracy rather than performance. Secondly, the particles in a DEM simulation are highly dynamic: they can be added, deleted and interaction relations can change at any timestep of the simulation. Thus this article details the multiple layers of optimization applied, such as a deep data structure profiling and reorganization, the usage of fast multithreaded memory allocators and of advanced process/thread-to-core pinning techniques. Experimental results evaluate the benefit of each optimization individually and validate the implementation using a real-world application executed on the HPC platform of the University of Luxembourg. Finally, we present our hybrid MPI+OpenMP results, with a 15%-20% performance gain, and show how it overcomes the scalability limits of XDEM-based pure MPI simulations (by increasing the number of compute cores without a drop in performance).
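
The hybrid pattern discussed above is sketched below in a minimal, self-contained C++ example (not the XDEM code; the workload is a placeholder): MPI distributes the particle blocks across processes while OpenMP parallelizes the per-particle loop inside each process.

```cpp
// Minimal hybrid MPI+OpenMP sketch. Build with e.g. `mpicxx -fopenmp hybrid.cpp`
// and run with `mpiexec -n <ranks> ./a.out`.
#include <cstdio>
#include <vector>
#include <mpi.h>
#include <omp.h>

int main(int argc, char** argv) {
    int provided = 0;
    // FUNNELED is enough here: only the master thread issues MPI calls.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank owns a block of particles (toy workload).
    std::vector<double> force(100000, 0.0);

    double local = 0.0;
    #pragma omp parallel for reduction(+ : local)
    for (long i = 0; i < static_cast<long>(force.size()); ++i) {
        force[i] = 0.5 * i;        // placeholder for the contact-force kernel
        local += force[i];
    }

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("%d ranks x %d threads, global sum = %g\n",
                    size, omp_get_max_threads(), global);

    MPI_Finalize();
    return 0;
}
```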

Parallel Coupling of CFD-DEM simulations
Besseron, Xavier; Pozzetti, Gabriele; Rousset, Alban; Peters, Bernhard

Presentation (2018, August 20)

High Performance Computing and Big Data analytics in Luxembourg: Overview and Challenges in the EuroHPC horizon
Besseron, Xavier; Varrette, Sébastien

Presentation (2018, August)

Accelerating modelling and simulation in the data deluge era requires the appropriate hardware and infrastructure at scale. The University of Luxembourg has been active since 2007 in developing its own infrastructure and expertise in the HPC and BD domains. The current state of developments is briefly reviewed in the context of the national and European HPC strategy, in which Luxembourg is starting to play a role.

Parallel Coupling of CFD-DEM simulations
Pozzetti, Gabriele; Besseron, Xavier; Rousset, Alban; Peters, Bernhard

Presentation (2018, August)

Eulerian-Lagrangian couplings are nowadays widely used to address engineering and technical problems. In particular, CFD-DEM couplings have been successfully applied to study several configurations ranging from mechanical, to chemical and environmental engineering. However, such simulations are normally very computationally intensive, and the execution time represents a major issue for the applicability of this numerical approach to complex scenarios. With this work, we introduce a novel coupling approach aiming at improving the performance of parallel CFD-DEM simulations. This strategy relies on two points. First, we propose a new partition-collocation strategy for the parallel execution of CFD-DEM couplings, which can considerably reduce the amount of inter-process communication between the CFD and DEM parts. However, this strategy imposes some alignment constraints on the CFD mesh. Secondly, we adopt a dual-grid multiscale scheme for the CFD-DEM coupling, which is known to offer better numerical properties and allows us to obtain more flexibility in the domain partitioning, overcoming the alignment constraints. We assess the correctness and performance of our approach on elementary benchmarks and at large scale with a realistic test-case. The results show a significant performance improvement compared to other state-of-the-art CFD-DEM couplings presented in the literature.

UL HPC Tutorial: Performance engineering - HPC debugging and profiling
Plugaru, Valentin; Besseron, Xavier; Varrette, Sébastien; Diehl, Sarah; Parisot, Clément; Cartiaux, Hyacinthe; Bouvry, Pascal

Presentation (2018, June)

Hydrodynamic Analysis of Gas-Liquid-Liquid-Solid Reactors using the XDEM Numerical Approach
Baniasadi, Maryam; Peters, Bernhard; Baniasadi, Mehdi; Besseron, Xavier

in Canadian Journal of Chemical Engineering (2018)

Multiphase reactors are abundantly used in many industries. Among them, a few reactors deal with four phases, so-called gas-liquid-liquid-solid systems, which have received less attention due to their complexity. Numerical study of such complex systems is not easy and requires considerable computational effort. In this study, a discrete-continuous numerical model known as the eXtended Discrete Element Method (XDEM) is proposed to investigate the hydrodynamic behaviour of fluid phases passing through a packed bed of solid particles. This model is applied to the dripping zone of a blast furnace. In this zone, two distinct liquid phases, namely liquid iron and slag, flow through a pile of coke particles while exchanging momentum. In this work, besides the solid-fluid and gas-liquid interactions, the liquid-liquid interactions are also studied and the phases' mutual effects are discussed. In addition, a sensitivity study on the slag viscosity is performed, which shows the importance of liquid phase properties on the system behaviour. The evaluation of the results shows that the liquid iron accelerates the downward flow of the slag and the slag decelerates the downward flow of the liquid iron phase, due to the resistance force caused by their relative velocity.

A Parallel Multiscale DEM-VOF Method For Large-Scale Simulations Of Three-Phase Flows
Pozzetti, Gabriele; Besseron, Xavier; Rousset, Alban; Peters, Bernhard

in Proceedings of ECCM-ECFD 2018 (2018)

A parallel dual-grid multiscale DEM-VOF coupling is investigated here. Dual-grid multiscale couplings have recently been used to address different engineering problems involving the interaction between granular phases and complex fluid flows. Nevertheless, previous studies did not focus on the parallel performance of such a coupling and were, therefore, limited to relatively small applications. In this contribution, we provide an insight into the performance of the dual-grid multiscale DEM-VOF method for three-phase flows when operated in parallel. In particular, we focus on a well-known benchmark case for three-phase flows and assess the influence of the partitioning algorithm on the scalability of the dual-grid algorithm.

RapidRMSD: Rapid determination of RMSDs corresponding to motions of flexible molecules
Neveu, Emilie; Popov, Petr; Hoffmann, Alexandre; Migliosi, Angelo; Besseron, Xavier; Danoy, Grégoire; Bouvry, Pascal; Grudinin, Sergei

in Bioinformatics (2018)

Parallelizing XDEM: Load-balancing policies and efficiency, a study
Rousset, Alban; Besseron, Xavier; Peters, Bernhard

Scientific Conference (2017, September)

In XDEM, the simulation domain is geometrically decomposed into regular fixed-size cells that are used to distribute the workload between the processes. The role of the partitioning algorithm is to distribute the cells among all the processes in order to balance the workload. To accomplish this task, the partitioning algorithm relies on a computing/communication cost that has been estimated for each cell. A proper estimation of these costs is fundamental to obtain pertinent results during this phase. The study in this work is twofold. First, we integrate five partitioning algorithms (ORB, RCB, RIB, kway and PhG) in the XDEM framework [1]. Most of these algorithms are implemented within the Zoltan library [2], a parallel framework for partitioning and ordering problems. Secondly, we propose different policies to estimate the computing cost and communication cost of the different cells composing the simulation domain. Then, we present an experimental evaluation and a performance comparison of these partitioning algorithms and cost-estimation policies on a large-scale parallel execution of XDEM running on the HPC platform of the University of Luxembourg. Finally, after explaining the pros and cons of each partitioning algorithm and cost-estimation policy, we discuss the best choices to adopt depending on the simulation case.
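
A possible cost-estimation policy of the kind described above is sketched below (the weights and their coefficients are illustrative assumptions, not the policies evaluated in the paper): each cell receives a computation weight proportional to its particle count and a communication weight proportional to the contacts crossing its faces; these per-cell weights are what a partitioner, e.g. via Zoltan, would then consume.

```cpp
// Illustrative per-cell cost estimation for load balancing.
#include <cstdio>
#include <vector>

struct CellStats {
    int particles;        // particles inside the cell (compute load)
    int boundaryContacts; // contacts crossing a cell face (communication load)
};

int main() {
    std::vector<CellStats> cells = {{120, 10}, {30, 4}, {500, 60}, {0, 0}};

    const double alpha = 1.0;   // assumed cost per particle
    const double beta  = 5.0;   // assumed cost per cross-face contact

    double total = 0.0;
    std::vector<double> weight(cells.size());
    for (size_t i = 0; i < cells.size(); ++i) {
        weight[i] = alpha * cells[i].particles + beta * cells[i].boundaryContacts;
        total += weight[i];
    }

    for (size_t i = 0; i < cells.size(); ++i)
        std::printf("cell %zu: weight %.1f (%.1f%% of total)\n",
                    i, weight[i], 100.0 * weight[i] / total);
    return 0;
}
```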

On the performance of an overlapping-domain parallelization strategy for Eulerian-Lagrangian Multiphysics software
Pozzetti, Gabriele; Besseron, Xavier; Rousset, Alban; Mainassara Chekaraou, Abdoul Wahid; Peters, Bernhard

in AIP Conference Proceedings ICNAAM 2017 (2017, September)

In this work, a strategy for the parallelization of a two-way CFD-DEM coupling is investigated. It consists in adopting balanced overlapping partitions for the CFD and the DEM domains, which aims to reduce the memory consumption and inter-process communication between CFD and DEM. Two benchmarks are proposed to assess the consistency and scalability of this approach; a coupled execution on 252 cores shows that less than 1% of the time is used to perform inter-physics data exchange.

Comparing Broad-Phase Interaction Detection Algorithms for Multiphysics DEM Applications
Rousset, Alban; Mainassara Chekaraou, Abdoul Wahid; Liao, Yu-Chung; Besseron, Xavier; Varrette, Sébastien; Peters, Bernhard

in AIP Conference Proceedings ICNAAM 2017 (2017, September)

Collision detection is an ongoing source of research and optimization in many fields including video games and numerical simulations [6, 7, 8]. The goal of collision detection is to report a geometric contact when it is about to occur or has actually occurred. Unfortunately, detailed and exact collision detection for large numbers of objects represents an immense amount of computation, naively an O(n²) operation with n being the number of objects [9]. To avoid and reduce these expensive computations, collision detection is decomposed into two phases, as shown in Figure 1: the Broad-Phase and the Narrow-Phase. In this paper, we focus on Broad-Phase algorithms in a large dynamic three-dimensional environment. We studied two kinds of Broad-Phase algorithms: spatial partitioning and spatial sorting. Spatial partitioning techniques operate by dividing space into a number of regions that can be quickly tested against each object. Two types of spatial partitioning are considered: grids and trees. The grid-based algorithms perform spatial partitioning by dividing space into regions and testing whether objects overlap the same region of space, which reduces the number of pairs to test. The tree-based algorithms use a tree structure where each node spans a particular area of space, which reduces the pairwise checking cost because only tree leaves are checked. The spatial sorting based algorithm consists of a sorted spatial ordering of objects: Axis-Aligned Bounding Boxes (AABBs) are projected onto the x, y and z axes and put into sorted lists. By sorting the projections onto the axes, two objects collide if and only if they collide on all three axes. This axis sorting reduces the number of pairs to test by restricting the tests to pairs which overlap on at least one axis. For this study, ten different Broad-Phase collision detection algorithms or frameworks have been considered. The Bullet [6] and CGAL [10, 11] frameworks have been used. Concerning the implemented algorithms, most of them come from published papers or available reference implementations.
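
The sweep-and-prune idea described above is illustrated by the following single-axis C++ sketch (illustrative only, not one of the benchmarked implementations): AABBs are projected onto the x axis, sorted by their lower bound, and swept with an active list; a full broad phase would repeat this on y and z and keep only pairs overlapping on all three axes.

```cpp
// Minimal single-axis sweep-and-prune: report pairs whose x intervals overlap
// as broad-phase candidates.
#include <algorithm>
#include <cstdio>
#include <vector>

struct AABB { int id; double xmin, xmax; };

int main() {
    std::vector<AABB> boxes = {
        {0, 0.0, 1.0}, {1, 0.8, 1.5}, {2, 3.0, 4.0}, {3, 3.9, 4.2}};

    // Sort intervals by their lower bound on the x axis.
    std::sort(boxes.begin(), boxes.end(),
              [](const AABB& a, const AABB& b) { return a.xmin < b.xmin; });

    // Sweep along x with an "active" list of intervals that are still open.
    std::vector<AABB> active;
    for (const AABB& b : boxes) {
        // Close intervals that ended before b starts: they cannot overlap b
        // or any later box.
        active.erase(std::remove_if(active.begin(), active.end(),
                         [&](const AABB& a) { return a.xmax < b.xmin; }),
                     active.end());
        for (const AABB& a : active)
            std::printf("candidate pair (%d, %d)\n", a.id, b.id);
        active.push_back(b);
    }
    return 0;
}
```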

XDEM: from HPC to the Cloud
Besseron, Xavier

Scientific Conference (2017, January)

Numerical study of the influence of particle size and packing on pyrolysis products using XDEM
Mahmoudi, Amir Houshang; Hoffmann, F.; Peters, Bernhard; Besseron, Xavier

in International Communications in Heat & Mass Transfer (2016), 71

Conversion of biomass as a renewable source of energy is one of the most challenging topics in industry and academia. Numerical models may help designers to better understand the details of the processes involved within the reactor, to improve process control and to increase the efficiency of the boilers. In this work, XDEM as an Euler-Lagrange model is used to predict the heat-up, drying and pyrolysis of biomass in a packed bed of spherical biomass particles. The fluid flow through the void space of a packed bed (which is formed by solid particles) is modeled as three-dimensional flow through a porous medium using a continuous approach. The solid phase forming the packed bed is represented by individual, discrete particles which are described by a Lagrangian approach. On the particle level, distributions of temperature and species within a single particle are accounted for by a system of one-dimensional and transient conservation equations. The model is compared to four sets of experimental data from independent research groups. Good agreement with all experimental data is achieved, proving the reliability of the numerical methodology used. The proposed model is used to investigate the impact of particle size in combination with particle packing on char production. For this purpose, three setups of packed beds differing in particle size and packing mode are studied under the same process conditions. The predicted results show that arranging the packed bed in layers of small and large particles may increase the final average char yield for the entire bed by 46%. © 2015 Elsevier B.V.

Modeling of the biomass combustion on a forward acting grate using XDEM
Mahmoudi, Amir Houshang; Besseron, Xavier; Hoffmann, F.; Markovic, M.; Peters, Bernhard

in Chemical Engineering Science (2016), 142

The grate firing system is one of the most common ways to burn biomass because it is able to burn a broad range of fuels with little or even no fuel preparation required. In order to improve the fuel combustion efficiency, it is important to understand the details of the thermochemical process in such furnaces. However, the process is very complex due to the many physical and chemical phenomena involved, such as drying, pyrolysis, char combustion, gas phase reaction, two-phase flow and many more. The main objective of this work is to study precisely the processes involved in biomass combustion on a forward acting grate and provide a detailed insight into the local and global conversion phenomena. For this purpose, XDEM as an Euler-Lagrange model is used, in which the fluid phase is a continuous phase and each particle is tracked with a Lagrangian approach. The model has been compared with experimental data. Very good agreement between simulation and measurement has been achieved, proving the ability of the model to predict the biomass combustion under study on the grate. © 2015 Elsevier Ltd.

HPC or the Cloud: a cost study over an XDEM Simulation
Emeras, Joseph; Besseron, Xavier; Varrette, Sébastien; Bouvry, Pascal; Peters, Bernhard

in Proc. of the 7th International Supercomputing Conference in Mexico (ISUM 2016) (2016)

UL HPC in practice: why, what, how, where to look
Besseron, Xavier

Presentation (2015, June 25)

XDEM: eXtended Discrete Element Method
Besseron, Xavier; Peters, Bernhard

Presentation (2015, June 16)

Performance Evaluation of the XDEM framework on the OpenStack Cloud Computing Middleware
Besseron, Xavier; Plugaru, Valentin; Mahmoudi, Amir Houshang; Varrette, Sébastien; Peters, Bernhard; Bouvry, Pascal

in Proceedings of the Fourth International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering (2015, February)

As Cloud Computing services become ever more prominent, it appears necessary to assess the efficiency of these solutions. This paper presents a performance evaluation of the OpenStack Cloud Computing middleware using our XDEM application, simulating the pyrolysis of biomass, as a benchmark. We propose a systematic study based on a fully automated benchmarking framework to evaluate 3 different configurations: Native (i.e. no virtualization), OpenStack with the KVM hypervisor and OpenStack with the XEN hypervisor. Our approach features the following advantages: a real user application, a fair comparison using the same hardware, and large-scale distributed execution, while being fully automated and reproducible. Experiments have been run on two different clusters, using up to 432 cores. Results show a moderate overhead for sequential execution and a significant penalty for distributed execution under the Cloud middleware. The overhead on multiple nodes is between 10% and 30% for OpenStack/KVM and between 30% and 60% for OpenStack/XEN.

A discrete/continuous numerical approach to multi-physics
Peters, Bernhard; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio; Mahmoudi, Amir Houshang; Mohseni, Seyedmohammad

in IFAC-PapersOnLine (2015), 28(1), 645-650

A variety of technical applications involve not only the physics of a single domain but several physical phenomena, and are therefore referred to as multi-physics. As long as the phenomena taken into account are either continuous or discrete, i.e. Eulerian or Lagrangian, a homogeneous solution concept can be employed. However, numerous challenges in engineering include a continuous and a discrete phase simultaneously, and therefore cannot be solved by continuous or discrete approaches only. Problems that include both a continuous and a discrete phase are important in applications of the pharmaceutical industry, e.g. drug production, the agriculture and food processing industry, mining, construction and agricultural machinery, metal production, power generation and systems biology. The Extended Discrete Element Method (XDEM) is a novel technique, which provides a significant advance for coupled discrete and continuous numerical simulation concepts. It expands the dynamics of particles as described by the classical discrete element method (DEM) by a thermodynamic state or stress/strain for each particle, coupled to a continuum phase such as a fluid flow or a structure. XDEM additionally estimates properties such as the interior temperature and/or species distribution. These predictive capabilities are extended to fluid flow through an interaction by heat, mass and momentum transfer, which is important for process engineering. © 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Assessing Heat Transfer Through Walls Of Packed Bed Reactors By An Innovative Particle-Resolved Approach
Peters, Bernhard; Singhal, A.; Besseron, Xavier; Estupinan, A.; Mahmoudi, Amir Houshang; Mohseni, M.

in 18th IFRF Member's Conference (2015)

ParaMASK: a Multi-Agent System for the Efficient and Dynamic Adaptation of HPC Workloads
Guzek, Mateusz; Besseron, Xavier; Varrette, Sébastien; Danoy, Grégoire; Bouvry, Pascal

in 14th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2014) (2014, December)

HPC Performance and Energy-Efficiency of the OpenStack Cloud Middleware
Varrette, Sébastien; Plugaru, Valentin; Guzek, Mateusz; Besseron, Xavier; Bouvry, Pascal

in Proc. of the 43rd Intl. Conf. on Parallel Processing (ICPP-2014), Heterogeneous and Unconventional Cluster Architectures and Applications Workshop (HUCAA'14) (2014, September)

XDEM Research on UL HPC platform
Besseron, Xavier

Presentation (2014, May 07)

ATIS: Automated Testing of Installed Software. Or so far, How to validate MPI stacks of an HPC cluster?
Besseron, Xavier

Presentation (2014, February 01)

Automated Testing of Installed Software (ATIS) is a testing framework to validate the various flavors of software installed on an HPC site. It is composed of a set of unit tests, a runtime and a result-gathering dashboard. These tests are user-oriented, as they assess the basic features that a general user expects to work on an HPC platform. Currently, it only focuses on generic MPI functionality, as MPI is one complex and critical component of an HPC platform, but it will be extended to compilers, libraries and performance validation and regression in the future. HPC centers tend to provide a wide choice of software. Different users require different software, but also different versions of the same software. Combined with the different compilers, MPI stacks and library dependencies, this leads to an explosion of software flavors installed on an HPC site. Tools already exist to help manage this large variety of software. Users can choose their software from the software list using the 'module' system. Administrators can perform automatic compilation and installation of software using EasyBuild. Additionally, software also requires some customization on some HPC sites. Thus, software flavors need to be validated after installation to check that they are working as expected by the users. We developed and provide a set of unit tests together with a runtime and result-gathering framework to perform such Automated Testing of Installed Software. These tests take the users' side in order to test any basic feature that a general user expects to work on an HPC platform. So far, the proposed tests only focus on generic MPI functionality, as it is one complex and critical component of an HPC platform. The unit tests include, for example, compilation with mpicc and distributed execution with mpiexec. It has been applied successfully on the HPC platforms of the University of Luxembourg to assess builds of OpenMPI, MPICH, MVAPICH2 and IntelMPI generated with EasyBuild. In the future, we consider extending our unit tests to validate more components like compilers, libraries, toolchains and even applications. Another future direction is to consider performance validation and regression.
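
A minimal MPI sanity check of the kind described above could look as follows (an illustrative sketch, not the actual ATIS unit tests): it is compiled with the MPI stack under test and launched with mpiexec, and a non-zero exit code flags the installation as broken.

```cpp
// Minimal MPI sanity check. Build with the stack under test, e.g.
// `mpicxx check_mpi.cpp -o check_mpi`, and run `mpiexec -n 4 ./check_mpi`.
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Exercise a collective: every rank must agree on the sum 0+1+...+(size-1).
    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    const int expected = size * (size - 1) / 2;

    if (rank == 0)
        std::printf("ranks=%d allreduce=%d expected=%d -> %s\n",
                    size, sum, expected, sum == expected ? "OK" : "FAIL");

    MPI_Finalize();
    return sum == expected ? 0 : 1;
}
```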

The extended discrete element method (XDEM) applied to drying of a packed bed
Peters, Bernhard; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio; Hoffmann, Florian; Michael, Mark; Mahmoudi, Amir Houshang

in Industrial Combustion (2014), 14

A vast number of engineering applications involve physics not solely of a single domain but of several physical phenomena, and therefore are referred to as multi-physical. As long as the phenomena considered are to be treated by either a continuous (i.e. Eulerian) or discrete (i.e. Lagrangian) approach, numerical solution methods may be employed to solve the problem. However, numerous challenges in engineering exist and evolve; those include modelling a continuous and discrete phase simultaneously, which cannot be solved accurately by continuous or discrete approaches only. Problems that involve both a continuous and a discrete phase are important in applications as diverse as the pharmaceutical industry, the food processing industry, mining, construction, agricultural machinery, metals manufacturing, energy production and systems biology. A novel technique referred to as Extended Discrete Element Method (XDEM) has been developed that offers a significant advancement for coupled discrete and continuous numerical simulation concepts. XDEM extends the dynamics of granular materials or particles as described through the classical discrete element method (DEM) to include additional properties such as the thermodynamic state or stress/strain for each particle coupled to a continuous phase such as a fluid flow or a solid structure. Contrary to a continuum mechanics concept, XDEM aims at resolving the particulate phase through the various processes attached to particles. While DEM predicts the spatial-temporal position and orientation for each particle, XDEM additionally estimates properties such as the internal temperature and/or species distribution during drying, pyrolysis or combustion of solid fuel material such as biomass in a packed bed. These predictive capabilities are further extended by an interaction with fluid flow by heat, mass and momentum transfer and the impact of particles on structures. © International Flame Research Foundation, 2014.

Beschreibung des Transportes von Schwimmkörpern bei Flutwasser über eine Euler-Lagrange Kopplung
Peters, Bernhard; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio; Hoffmann, Florian; Michael, Mark; Mahmoudi, Amir Houshang; Vogel, Franck

in Dresdner Wasserbauliche Mitteilungen (2014)

An Integral Approach to Multi-physics Application for Packed Bed Reactors
Peters, Bernhard; Besseron, Xavier; Estupinan, A.; Hoffmann, F.; Michael, M.; Mahmoudi, Amir Houshang; Mohseni, M.

in 24th European Symposium on Computer Aided Process Engineering, ESCAPE 24 (2014)

Scale-Resolved Prediction of Pyrolysis in a Packed Bed by the Extended Discrete Element Method (XDEM)
Peters, Bernhard; Besseron, Xavier; Estupinan, A.; Hoffmann, F.; Michael, M. And Mahmoudi; Mohseni, M.

in The Ninth International Conference on Engineering Computational Technology (2014)

HPC Performance and Energy-Efficiency of Xen, KVM and VMware Hypervisors
Varrette, Sébastien; Guzek, Mateusz; Plugaru, Valentin; Besseron, Xavier; Bouvry, Pascal

in Proc. of the 25th Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2013) (2013, October)

With growing concern about the considerable energy consumed by HPC platforms and data centers, research efforts are targeting green approaches with higher energy efficiency. In particular, virtualization is emerging as the prominent approach to mutualize the energy consumed by a single server running multiple VM instances. Even today, it remains unclear whether the overhead induced by virtualization and the corresponding hypervisor middleware suits an environment as demanding as an HPC platform. In this paper, we analyze from an HPC perspective the three most widespread virtualization frameworks, namely Xen, KVM, and VMware ESXi, and compare them with a baseline environment running in native mode. We performed our experiments on the Grid'5000 platform by measuring the results of the reference HPL benchmark. Power measurements were also performed in parallel to quantify the potential energy efficiency of the virtualized environments. In general, our study offers novel incentives toward in-house HPC platforms running without any virtualization framework.

Unified Design for Parallel Execution of Coupled Simulations using the Discrete Particle Method
Besseron, Xavier; Hoffmann, Florian; Michael, Mark; Peters, Bernhard

in Proceedings of the Third International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering (2013)

This paper presents the enhanced design of the Discrete Particle Method (DPM), a simulation tool which provides high-quality and fast simulations to solve a broad range of industrial processes involving granular materials. It enables solving mechanical and thermodynamic problems through different simulation modules (motion, chemical conversion). This new design allows transparently coupling the simulation modules in parallel execution. It relies on a unified interface and timebase for the simulation modules and a flexible decomposition of the simulation space into cells. Experimental results study the behavior of the Orthogonal Recursive Bisection (ORB) partitioning algorithm. Good scalability is achieved, as the parallel execution on a distributed platform provides a 17-times speedup using 64 processes.
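
The ORB partitioning studied here can be illustrated with a short, self-contained C++ sketch (a generic ORB over cell centres, not the DPM code): at each level the cell set is split at the median along its longest axis and the two halves are recursively assigned to the two halves of the process range.

```cpp
// Illustrative Orthogonal Recursive Bisection (ORB) of cells among processes.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Cell { double x, y; int rank = -1; };

// Recursively split `cells` among ranks [firstRank, firstRank + nRanks).
void orb(std::vector<Cell*> cells, int firstRank, int nRanks) {
    if (nRanks == 1 || cells.empty()) {
        for (Cell* c : cells) c->rank = firstRank;
        return;
    }
    // Choose the longest axis of the bounding box of the cell centres.
    auto xcmp = [](Cell* a, Cell* b) { return a->x < b->x; };
    auto ycmp = [](Cell* a, Cell* b) { return a->y < b->y; };
    auto xext = std::minmax_element(cells.begin(), cells.end(), xcmp);
    auto yext = std::minmax_element(cells.begin(), cells.end(), ycmp);
    bool splitX = ((*xext.second)->x - (*xext.first)->x) >=
                  ((*yext.second)->y - (*yext.first)->y);

    // Median split along that axis.
    size_t mid = cells.size() / 2;
    if (splitX)
        std::nth_element(cells.begin(), cells.begin() + mid, cells.end(), xcmp);
    else
        std::nth_element(cells.begin(), cells.begin() + mid, cells.end(), ycmp);

    int leftRanks = nRanks / 2;
    orb(std::vector<Cell*>(cells.begin(), cells.begin() + mid), firstRank, leftRanks);
    orb(std::vector<Cell*>(cells.begin() + mid, cells.end()),
        firstRank + leftRanks, nRanks - leftRanks);
}

int main() {
    std::vector<Cell> grid;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            grid.push_back({i + 0.5, j + 0.5});

    std::vector<Cell*> ptrs;
    for (Cell& c : grid) ptrs.push_back(&c);
    orb(ptrs, 0, 4);   // distribute the 16 cells over 4 processes

    for (const Cell& c : grid)
        std::printf("cell (%.1f, %.1f) -> process %d\n", c.x, c.y, c.rank);
    return 0;
}
```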

Using Data-flow analysis in MAS for power-aware HPC runs
Varrette, Sébastien; Danoy, Grégoire; Guzek, Mateusz; Besseron, Xavier; Bouvry, Pascal

in Proc. of the IEEE Intl. Conf. on High Performance Computing and Simulation (HPCS'13) (2013)

Enhanced Thermal Process Engineering by the Extended Discrete Element Method (XDEM)
Peters, Bernhard; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio; Hoffmann, Florian; Michael, Mark; Mahmoudi, Amir Houshang

in Universal Journal of Engineering Science (2013), 1

A vast number of engineering applications include a continuous and discrete phase simultaneously, and therefore, cannot be solved accurately by continuous or discrete approaches only. Problems that involve both a continuous and a discrete phase are important in applications as diverse as pharmaceutical industry e.g. drug production, agriculture food and processing industry, mining, construction and agricultural machinery, metals manufacturing, energy production and systems biology. A novel technique referred to as Extended Discrete Element Method (XDEM) is developed, that offers a significant advancement for coupled discrete and continuous numerical simulation concepts. The Extended Discrete Element Method extends the dynamics of granular materials or particles as described through the classical discrete element method (DEM) to additional properties such as the thermodynamic state or stress/strain for each particle coupled to a continuum phase such as fluid flow or solid structures. Contrary to a continuum mechanics concept, XDEM aims at resolving the particulate phase through the various processes attached to particles. While DEM predicts the spatial-temporal position and orientation for each particle, XDEM additionally estimates properties such as the internal temperature and/or species distribution. These predictive capabilities are further extended by an interaction with fluid flow by heat, mass and momentum transfer and the impact of particles on structures.

Application of the Extended Discrete Element Method (XDEM) in Computer-Aided Process Engineering
Peters, Bernhard; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio; Hoffmann, Florian; Michael, Mark; Mahmoudi, Amir Houshang

Scientific Conference (2013)

Die Extended Discrete Element Method (XDEM) für multiphysikalische Anwendungen
Peters, Bernhard; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio; Hoffmann, Florian; Michael, Mark; Mahmoudi, Amir Houshang; Dziugys, Algis; Vogel, Frank

Scientific Conference (2013)

A vast number of engineering applications include a continuous and discrete phase simultaneously, and therefore, cannot be solved accurately by continuous or discrete approaches only. Problems that involve both a continuous and a discrete phase are important in applications as diverse as pharmaceutical industry e.g. drug production, agriculture food and processing industry, mining, construction and agricultural machinery, metals manufacturing, energy production and systems biology.
A novel technique referred to as Extended Discrete Element Method (XDEM) is developed, that offers a significant advancement for coupled discrete and continuous numerical simulation concepts. XDEM treats the solid phase representing the particles and the fluidised phase, usually a fluid phase or a structure, as two distinguished phases that are coupled through heat, mass and momentum transfer. An outstanding feature of the numerical concept is that each particle is treated as an individual entity that is described by its thermodynamic state, e.g. temperature and reaction progress, and its position and orientation in time and space. The thermodynamic state includes one-dimensional and transient distributions of temperature and species within the particle and therefore allows a detailed and accurate characterisation of the reaction progress in a fluidised bed. Thus, the proposed methodology provides a high degree of resolution ranging from scales within a particle to the continuum phase as global dimensions.
These superior features as compared to traditional and pure continuum mechanics approaches are applied to predict drying of wood particles in a packed bed and the impact of particles on a membrane. Pre-heated air streamed through the packed bed, and thus heated the particles with simultaneous evaporation of moisture. Water vapour is transferred into the gas phase at the surface of the particles and transported to the exit of the reactor. A rather inhomogeneous drying process in the upper part of the reactor with higher temperatures around the circumference of the inner reactor wall was observed. The latter is due to increased porosity in conjunction with higher mass flow rates than in the centre of the reactor, and thus augmented heat transfer. A comparison of the weight loss over time agreed well with measurements.
Under the impact of falling particles the surface of a membrane deforms, which conversely affects the motion of particles on the surface. Due to an increasing vertical deformation, particles roll or slide down toward the bottom of the recess, where they are collected in a heap. Furthermore, during initial impacts deformation waves are predicted that propagate through the structure, and may indicate resonant effects already before a prototype is built. Hence, the Extended Discrete Element Method offers a high degree of resolution, avoiding further empirical correlations, and extends the knowledge of the underlying physics. Although most of the workload concerning CFD and FEM is arranged in the ANSYS workbench, a complete integration is intended that allows for a smooth workflow of the entire simulation environment.

Monitoring and Predicting Hardware Failures in HPC Clusters with FTB-IPMI
Rajachandrasekar, Raghunath; Besseron, Xavier; Panda, Dhabaleswar K.

in Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (2012)

Fault detection and prediction in HPC clusters and Cloud-computing systems are increasingly challenging issues. Several system middleware such as job schedulers and MPI implementations provide support for both reactive and proactive mechanisms to tolerate faults. These techniques rely on external components such as system logs and infrastructure monitors to provide information about hardware/software failure, either through detection or as a prediction. However, these middleware work in isolation, without disseminating the knowledge of faults encountered. In this context, we propose a light-weight multi-threaded service, namely FTB-IPMI, which provides distributed fault-monitoring using the Intelligent Platform Management Interface (IPMI) and coordinated propagation of fault information using the Fault-Tolerance Backplane (FTB). In essence, it serves as a middleman between the system hardware and the software stack by translating raw hardware events into structured software events and delivering them to any interested component using a publish-subscribe framework. Fault predictors and other decision-making engines that rely on distributed failure information can benefit from FTB-IPMI to facilitate proactive fault-tolerance mechanisms such as preemptive job migration. We have developed a fault-prediction engine within MVAPICH2, an RDMA-based MPI implementation, to demonstrate this capability. Failure predictions made by this engine are used to trigger migration of processes from failing nodes to healthy spare nodes, thereby providing resilience to the MPI application. Experimental evaluation clearly indicates that a single instance of FTB-IPMI can scale to several hundreds of nodes with a remarkably low resource-utilization footprint. A deployment of FTB-IPMI that services a cluster with 128 compute nodes sweeps the entire cluster and collects IPMI sensor information on CPU temperature, system voltages and fan speeds in about 0.75 seconds. The average CPU utilization of this service running on a single node is 0.35%.

CRFS: A Lightweight User-Level Filesystem for Generic Checkpoint/Restart
Ouyang, Xiangyong; Rajachandrasekar, Raghunath; Besseron, Xavier; Wang, Hao; Huang, Jian; Panda, Dhabaleswar K.

in 2011 International Conference on Parallel Processing (2011, September)

Can a Decentralized Metadata Service Layer benefit Parallel Filesystems?
Meshram, Vilobh; Besseron, Xavier; Ouyang, Xiangyong; Rajachandrasekar, Raghunath; Darbha, Ravi Prakash; Panda, Dhabaleswar K.

in 2011 IEEE International Conference on Cluster Computing (2011, September)

Impact of over-decomposition on coordinated checkpoint/rollback protocol
Besseron, Xavier; Gautier, Thierry

in Euro-Par 2011: Parallel Processing Workshops (2011, August)

Can Checkpoint/Restart Mechanisms Benefit from Hierarchical Data Staging?
Rajachandrasekar, Raghunath; Ouyang, Xiangyong; Besseron, Xavier; Meshram, Vilobh; Panda, Dhabaleswar K.

in Euro-Par 2011: Parallel Processing Workshops (2011, August)

High Performance Pipelined Process Migration with RDMA
Ouyang, Xiangyong; Rajachandrasekar, Raghunath; Besseron, Xavier; Panda, Dhabaleswar K.

in 2011 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (2011, May)

Proactive Fault-Resilience with Process Migration in MVAPICH2: A demonstration with Tachyon
Ouyang, Xiangyong; Rajachandrasekar, Raghunath; Besseron, Xavier; Panda, Dhabaleswar K.

Presentation (2010, November)

Kaapi / Charm++ preliminary comparison
Besseron, Xavier; Gautier, Thierry; Zheng, Gengbin; Kalé, Laxmikant V.

Presentation (2010, June)

Tolérance aux fautes et reconfiguration dynamique pour les applications distribuées à grande échelle
Besseron, Xavier

Doctoral thesis (2010)

This work deals with high performance computing on large-scale platforms like computing grids. Computing grids are characterized by (1) frequent changes in the execution context and, especially, (2) a high failure probability caused by the large number of components. Running an application efficiently in such an environment requires taking these parameters into account. Our research work is based on the abstract representation of the application as a data flow graph, from the parallel and distributed programming model Athapascan/Kaapi. This abstract representation is used to provide solutions for (1) dynamic reconfiguration and (2) fault tolerance issues. First, we propose a dynamic reconfiguration mechanism that manages, transparently for the reconfiguration programmer, concurrent operations on the application state and the mutual consistency of states for distributed reconfiguration. Secondly, we present an original fault tolerance protocol that allows a partial rollback of the application in case of failure. For this purpose, the set of strictly required computation tasks to recover is computed. These contributions are evaluated through the Kaapi and X-Kaapi software on the Grid'5000 computing platform.

Fault tolerance for a data flow model
Besseron, Xavier

Presentation (2010, March)

Fault tolerance and availability awareness in computational grids
Besseron, Xavier; Bouguerra, Slim; Gautier, Thierry; Saule, Érik; Trystram, Denis

in Fundamentals of Grid Computing: Theory, Algorithms and Technologies (2009)

X-Kaapi : Une nouvelle implémentation eXtrême du vol de travail
Besseron, Xavier; Laferriere, Christophe; Traore, Daouda; Gautier, Thierry

in Rencontres Francophones du Parallélisme (RenPar'19) (2009, September)

Optimized Coordinated Checkpoint/Rollback Protocol using a Dataflow Graph Model
Besseron, Xavier; Gautier, Thierry

Presentation (2009, January 22)

Fault-tolerance protocols play an important role in today's long-running scientific parallel applications. The probability of a failure may be significant due to the number of unreliable components involved during an execution. We present our approach and preliminary results on a new checkpoint/rollback protocol based on a coordinated scheme. The application is described using a dataflow graph, which is an abstract representation of the execution. Thanks to this representation, the fault recovery in our protocol only requires a partial restart of the other processes. Simulations on a domain decomposition application show that the amount of computation required to restart and the number of involved processes are reduced compared to the classical global rollback protocol.
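
The partial-restart computation described above can be sketched as a backward traversal of the dataflow graph (an illustrative C++ sketch with a toy graph, not the Kaapi protocol): starting from the tasks executed on the failed process, producers are added to the restart set only if their outputs are neither checkpointed nor still available on a surviving process.

```cpp
// Sketch of the partial-restart idea: collect only the tasks whose outputs
// were lost with the failed process and are not covered by a checkpoint.
#include <cstdio>
#include <set>
#include <vector>

struct Task {
    int id;
    int process;                 // process that executed the task
    std::vector<int> inputs;     // ids of producer tasks
    bool outputCheckpointed;     // output saved in a surviving checkpoint?
};

int main() {
    // Tiny dataflow graph: 0 -> 1 -> 3 and 0 -> 2 -> 3
    std::vector<Task> tasks = {
        {0, 0, {},     true},    // output checkpointed on process 0
        {1, 1, {0},    false},   // ran on the failed process
        {2, 0, {0},    true},
        {3, 1, {1, 2}, false},
    };
    const int failedProcess = 1;

    // Seed: every task executed on the failed process must be re-examined.
    std::vector<int> stack;
    for (const Task& t : tasks)
        if (t.process == failedProcess) stack.push_back(t.id);

    std::set<int> toRestart;
    while (!stack.empty()) {
        int id = stack.back(); stack.pop_back();
        if (!toRestart.insert(id).second) continue;        // already scheduled
        // A producer must be re-executed only if its output is neither
        // checkpointed nor available on a surviving process.
        for (int p : tasks[id].inputs)
            if (!tasks[p].outputCheckpointed && tasks[p].process == failedProcess)
                stack.push_back(p);
    }

    std::printf("tasks to re-execute after failure of process %d:", failedProcess);
    for (int id : toRestart) std::printf(" %d", id);
    std::printf("\n");
    return 0;
}
```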

Optimised recovery with a coordinated checkpoint/rollback protocol for domain decomposition applications
Besseron, Xavier; Gautier, Thierry

in Modelling, Computation and Optimization in Information Systems and Management Sciences. MCO 2008 (2008, September)

IV Grid Plugtests: composing dedicated tools to run an application efficiently on Grid'5000
Besseron, Xavier; Danjean, Vincent; Gautier, Thierry; Guelton, Serge; Huard, Guillaume; Wagner, Frédéric

Presentation (2008, February 12)

Efficiently exploiting the resources of the whole Grid'5000 with a single application requires solving several issues: 1) resource reservation; 2) deployment of the application's processes; 3) scheduling of the application's tasks. For the IV Grid Plugtests, we used a dedicated tool for each issue. The N-Queens contest rules imposed ProActive for the resource reservations (issue 1). Issue 2 was solved using TakTuk, which allows deploying a large set of remote nodes; deployed nodes take part in the deployment using an adaptive algorithm that makes it very efficient. For the third issue, we wrote our application with the Athapascan API, whose model is based on the concepts of tasks and shared data. The application is described as a data-flow graph using the Shared and Fork keywords. This high-level abstraction of the hardware gives us an efficient execution with the Kaapi runtime engine, which uses a work-stealing scheduling algorithm to balance the workload between all the distributed processes.

Un protocole de sauvegarde / reprise coordonné pour les applications à flot de données reconfigurables
Besseron, Xavier; Pigeon, Laurent; Gautier, Thierry; Jafar, Samir

in Technique et Science Informatiques (2007)

Kaapi: A Thread Scheduling Runtime System for Data Flow Computations on Cluster of Multi-Processors
Gautier, Thierry; Besseron, Xavier; Pigeon, Laurent

in PASCO '07 Proceedings of the 2007 international workshop on Parallel symbolic computation (2007, July)

CCK : un protocole coordonné de sauvegarde/reprise pour la tolérance aux pannes des applications itératives en calcul numérique
Besseron, Xavier

Bachelor/master dissertation (2006)

CCK: An Improved Coordinated Checkpoint/Rollback Protocol for Dataflow Applications in Kaapi
Besseron, Xavier; Jafar, Samir; Gautier, Thierry; Roch, Jean Louis

in 2006 2nd International Conference on Information & Communication Technologies (2006, April)