Results 41-60 of 156.
Full Text
Large-scale research data management: Road to GDPR compliance
Bouvry, Pascal UL; Varrette, Sébastien UL; Plugaru, Valentin UL et al

Presentation (2018, April)

Full Text
Tutorial Big Data Analytics: Overview and Practical Examples
Varrette, Sébastien UL

Report (2018)

This tutorial will offer a synthetic view of Big Data Analytics challenges and of the tools that address them, and will focus on one of these tools through a practical session with a set of concrete examples. Level: beginner - advanced

Full Text
Peer Reviewed
PRESENCE: Toward a Novel Approach for Performance Evaluation of Mobile Cloud SaaS Web Services
Ibrahim, Abdallah Ali Zainelabden Abdallah UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 32nd IEEE Intl. Conf. on Information Networking (ICOIN 2018) (2018, January)

Full Text
Peer Reviewed
Comparing Broad-Phase Interaction Detection Algorithms for Multiphysics DEM Applications
Rousset, Alban UL; Mainassara Chekaraou, Abdoul Wahid UL; Liao, Yu-Chung UL et al

in AIP Conference Proceedings ICNAAM 2017 (2017, September)

Collision detection is an ongoing source of research and optimization in many fields, including video games and numerical simulations [6, 7, 8]. The goal of collision detection is to report a geometric contact when it is about to occur or has actually occurred. Unfortunately, detailed and exact collision detection for large numbers of objects represents an immense amount of computation, naively n² operations with n being the number of objects [9]. To avoid these expensive computations, collision detection is decomposed into two phases, as shown in Figure 1: the Broad-Phase and the Narrow-Phase. In this paper, we focus on Broad-Phase algorithms in a large dynamic three-dimensional environment. We studied two kinds of Broad-Phase algorithms: spatial partitioning and spatial sorting. Spatial partitioning techniques operate by dividing space into a number of regions that can be quickly tested against each object. Two types of spatial partitioning are considered: grids and trees. Grid-based algorithms partition space into regions and test whether objects overlap the same region, which reduces the number of pairs to test. Tree-based algorithms use a tree structure where each node spans a particular area of space; this reduces the pairwise checking cost because only tree leaves are checked. The spatial-sorting-based algorithm maintains a sorted spatial ordering of objects: Axis-Aligned Bounding Boxes (AABBs) are projected onto the x, y and z axes and put into sorted lists. By sorting the projections onto the axes, two objects collide if and only if they collide on all three axes, so the number of pairwise tests is reduced to only those pairs which overlap on at least one axis. For this study, ten different Broad-Phase collision detection algorithms or frameworks have been considered, including the Bullet [6] and CGAL [10, 11] frameworks; most of the implemented algorithms come from published papers or reference implementations.
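To make the spatial-sorting idea concrete, here is a minimal sweep-and-prune sketch in Python. It is an illustration of the technique only, not the implementation evaluated in the paper; all names are hypothetical.

```python
# Minimal sweep-and-prune broad phase: illustrative sketch only,
# not the paper's actual implementation.

def overlap_1d(lo_a, hi_a, lo_b, hi_b):
    """Two intervals overlap iff each starts before the other ends."""
    return lo_a <= hi_b and lo_b <= hi_a

def broad_phase(aabbs):
    """aabbs: list of (min_xyz, max_xyz) tuples, e.g. ((0,0,0), (1,1,1)).
    Returns candidate pairs whose boxes overlap on all three axes."""
    # Sort object indices by their minimum x coordinate (spatial sorting).
    order = sorted(range(len(aabbs)), key=lambda i: aabbs[i][0][0])
    candidates = []
    for pos, i in enumerate(order):
        for j in order[pos + 1:]:
            # Once a box starts past i's extent on x, no later box can overlap i.
            if aabbs[j][0][0] > aabbs[i][1][0]:
                break
            # x overlap is implied by the sort; confirm the remaining axes.
            if all(overlap_1d(aabbs[i][0][k], aabbs[i][1][k],
                              aabbs[j][0][k], aabbs[j][1][k]) for k in (1, 2)):
                candidates.append((i, j))
    return candidates
```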

Full Text
Tutorial Reproducible Research at the Cloud Era: Overview, Hands-on and Open challenges
Varrette, Sébastien UL

Learning material (2016)

The term Reproducible Research (RR) refers to “the idea that the ultimate product of academic research is the paper along with the full computational environment used to produce the results in the paper, such as the code, data, etc. that can be used to reproduce the results and create new work based on the research” (source: Wikipedia). The need for reproducibility is increasing dramatically as data analyses become more complex, involving larger datasets and more sophisticated computations. Obviously, the advent of the Cloud Computing paradigm is expected to provide the appropriate means for RR. This tutorial is meant to provide an overview of sensible tools every researcher (in computer science, but not only) should be aware of to enable RR in their own work. In particular, after a general talk presenting RR and the existing associated tools and workflows, this tutorial proposes several practical exercises and hands-on sessions meant to be performed on each attendee's laptop, covering the management of shareable development environments using Vagrant. Resources for this tutorial are available on GitHub.

Full Text
Proceedings of the 8th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2016)
Varrette, Sébastien UL; Bouvry, Pascal UL; Zomaya, Albert et al

Book published by IEEE Computer Society (2016)

CloudCom is the premier conference on Cloud Computing worldwide, attracting researchers, developers, users, students and practitioners from the fields of big data, systems architecture, services research, virtualization, security and privacy, and high performance computing, always with an emphasis on how to build cloud computing platforms with real impact. The conference is co-sponsored by the Institute of Electrical and Electronics Engineers (IEEE), is steered by the Cloud Computing Association, and draws on the excellence of its world-class Program Committee and its participants. The 8th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2016) was held in the city of Luxembourg on 12-15 December 2016.

Full Text
Peer Reviewed
Amazon Elastic Compute Cloud (EC2) vs. in-House HPC Platform: a Cost Analysis
Emeras, Joseph UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 9th IEEE Intl. Conf. on Cloud Computing (CLOUD 2016) (2016, June)

Since its advent in the middle of the 2000s, the Cloud Computing (CC) paradigm has increasingly been advertised as THE solution to most IT problems. While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, several voices (most probably commercial ones) express the wish that CC platforms could also serve HPC needs and eventually replace in-house HPC platforms. If we exclude the pure performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when submitted to an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that, most probably, the instances offered by Cloud providers are competitive from a cost point of view. In this article, we wanted to confirm (or refute) this intuition by evaluating the Total Cost of Ownership (TCO) of the in-house HPC facility we have operated since 2007 within the University of Luxembourg (UL), and compare it with the investment that would have been required to run the same platform (and the same workload) over a competitive Cloud IaaS offer. Our approach to this price comparison is two-fold. First, we propose a theoretical price-performance model based on the study of the actual Cloud instances proposed by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on our own cluster TCO and taking into account all the Operating Expenses (OPEX), we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. The results obtained advocate in general for the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing (CC) platforms, even when provided by the reference Cloud provider worldwide.
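The hourly comparison described above boils down to amortization arithmetic: spread CAPEX and cumulated OPEX over the usable hours of the cluster's lifetime. A hypothetical sketch follows; every figure is invented for illustration and none comes from the paper's dataset.

```python
# Illustrative TCO amortization sketch; all numbers are made up,
# not the figures measured in the paper.

def in_house_hourly_cost(capex, annual_opex, lifetime_years, utilization=0.8):
    """Amortized cost per usable cluster-hour of an owned platform."""
    hours = lifetime_years * 365 * 24 * utilization   # usable hours over lifetime
    total = capex + annual_opex * lifetime_years      # CAPEX + cumulated OPEX
    return total / hours

# Hypothetical 100-node cluster: 1.2 MEUR CAPEX, 150 kEUR/yr OPEX, 5-year life.
per_cluster_hour = in_house_hourly_cost(1_200_000, 150_000, 5)
per_node_hour = per_cluster_hour / 100
print(f"in-house: {per_node_hour:.3f} EUR/node-hour")

# Compare against a (hypothetical) on-demand instance price per hour.
ec2_node_hour = 0.90
print("cloud cheaper" if ec2_node_hour < per_node_hour else "in-house cheaper")
```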

Full Text
Peer Reviewed
Reducing Efficiency of Connectivity-Splitting Attack on Newscast via Limited Gossip
Muszynski, Jakub UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 19th European Event on Bio-Inspired Computation, EvoCOMNET 2016 (2016, March)

Newscast is a Peer-to-Peer, nature-inspired, gossip-based data exchange protocol used for information dissemination and membership management in large-scale, agent-based distributed systems. The model follows a probabilistic scheme able to keep a self-organised, small-world equilibrium featuring a complex, spatially structured and dynamically changing environment. Newscast gained popularity since the early 2000s thanks to its inherent resilience to node volatility, as the protocol exhibits strong self-healing properties. However, the original design proved to be surprisingly fragile in a Byzantine environment subjected to cheating faults. Indeed, a set of recent studies emphasized the hard-wired vulnerabilities of the protocol, leading to an efficient implementation of a malicious client, where a few naive cheaters are able to break the network connectivity in a very short time. Extending these previous works, we propose in this paper a modification of the seminal protocol with embedded counter-measures, improving the resilience of the scheme against malicious acts without significantly affecting the original Newscast's properties nor its inherent performance. Concrete experiments were performed to support these claims, using a framework implementing all the solutions discussed in this work.
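For intuition about the gossip scheme the attack targets, here is a highly simplified sketch of a Newscast-style cache exchange between two peers. The cache size and merge policy are assumptions for illustration; this is neither the original protocol code nor the hardened variant proposed in the paper.

```python
# Toy Newscast-style cache exchange: illustration only.
import random

CACHE_SIZE = 20  # assumed cache size

def gossip_step(caches, time):
    """caches: dict peer_id -> list of (neighbor_id, timestamp) entries."""
    a = random.choice(list(caches))
    if not caches[a]:
        return
    b = random.choice(caches[a])[0]          # pick a neighbor from a's cache
    # Exchange caches, each side adding a fresh entry about itself.
    merged = caches[a] + caches[b] + [(a, time), (b, time)]
    # Deduplicate by peer id, keeping the freshest entry.
    freshest = {}
    for peer, ts in merged:
        if peer not in freshest or ts > freshest[peer]:
            freshest[peer] = ts
    # Both sides keep the CACHE_SIZE freshest entries (dropping themselves).
    for node in (a, b):
        entries = [(p, t) for p, t in freshest.items() if p != node]
        entries.sort(key=lambda e: e[1], reverse=True)
        caches[node] = entries[:CACHE_SIZE]
```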

Full Text
Peer Reviewed
HPC or the Cloud: a cost study over an XDEM Simulation
Emeras, Joseph; Besseron, Xavier UL; Varrette, Sébastien UL et al

in Proc. of the 7th International Supercomputing Conference in Mexico (ISUM 2016) (2016)

Full Text
Peer Reviewed
An LLVM-based Approach to Generate Energy Aware Code by means of MOEAs
Varrette, Sébastien UL; Dorronsorro, Bernabe; Bouvry, Pascal UL

in Proc. of the 7th European Symposium on Computational Intelligence and Mathematics (ESCIM 2015) (2015, October)

Moderating energy consumption and building eco-friendly computing infrastructures are major concerns in the implementation of High Performance Computing (HPC) systems, especially when a worldwide effort targets the production of an Exaflop machine by 2020 within a power envelope of 20 MW. Tracking energy savings can be done at various levels, and in this paper we investigate the automatic generation of energy-aware software with the ambition to keep the same level of efficiency, testability, scalability and security. To this end, the Evo-LLVM framework is proposed. Based on the modular LLVM Compiler Infrastructure and exploiting various evolutionary heuristics, our scheme is designed to optimize, for a given input source code (written in C), the sequence of LLVM transformations that should be applied to improve its energy efficiency without degrading its other performance attributes (execution time, parallel or distributed scalability). Measuring this capacity is based on the combination of several metrics optimized simultaneously with Multi-Objective Evolutionary Algorithms (MOEAs). In this position paper, the NSGA-II algorithm is implemented within Evo-LLVM, while the analysis of more advanced heuristics is in progress. In all cases, the experimental validation of the framework over a pedagogical code sample reveals a drastic improvement of the energy consumed during execution while maintaining (or even improving) the average execution time.
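The core encoding is easy to picture: a candidate solution is an ordered sequence of LLVM passes, scored on two objectives. A minimal Python sketch of that representation and of Pareto dominance follows; the pass list, stub evaluation and population handling are illustrative assumptions, not the Evo-LLVM code, which drives the real LLVM toolchain and hardware energy meters.

```python
# Sketch of the idea behind Evo-LLVM: a genome is a sequence of LLVM
# optimization passes, scored on two objectives (energy, time).
import random

PASSES = ["inline", "licm", "loop-unroll", "gvn", "sccp"]  # example LLVM passes

def random_genome(length=6):
    """A genome is an ordered pass sequence; order matters to the optimizer."""
    return [random.choice(PASSES) for _ in range(length)]

def evaluate(genome):
    """Stand-in for compiling with the chosen passes and measuring
    energy (J) and runtime (s) on real hardware."""
    rng = random.Random(",".join(genome))  # fake measurement, stable per genome
    return (rng.uniform(50, 100), rng.uniform(1, 5))

def dominates(f1, f2):
    """Pareto dominance (minimization): no worse on both, better on one."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

population = [random_genome() for _ in range(20)]
scores = [evaluate(g) for g in population]
front = [g for g, f in zip(population, scores)
         if not any(dominates(other, f) for other in scores)]
print(f"{len(front)} non-dominated pass sequences")
```

NSGA-II, as used in the paper, adds non-dominated sorting and crowding-distance selection on top of exactly this kind of dominance relation.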

Full Text
HPC Performance and Energy Efficiency: Overview and Trends
Varrette, Sébastien UL

Speeches/Talks (2015)

Introduction to Git and Vagrant
Varrette, Sébastien UL

Learning material (2015)

Full Text
Peer Reviewed
Evalix: Classification and Prediction of Job Resource Consumption on HPC Platforms
Emeras, Joseph UL; Varrette, Sébastien UL; Guzek, Mateusz UL et al

in Proc. of the 19th Intl. Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP'15), part of the 29th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2015) (2015, May)

At the advent of a wished (or forced) convergence between High Performance Computing (HPC) platforms, stand-alone accelerators and virtualized resources from Cloud Computing (CC) systems, this article unveils the job prediction component of the Evalix project. This framework aims at an improved efficiency of the underlying Resource and Job Management System (RJMS) within heterogeneous HPC facilities through the automatic evaluation and characterization of the submitted workload. The objective is not only to better adapt the scheduled jobs to the available resource capabilities, but also to reduce energy costs. For that purpose, we collected the resource consumption of all the jobs executed on a production cluster over a period of three months. Based on the analysis and then the classification of the jobs, we computed a resource consumption model. The objective is to train a set of predictors based on the aforementioned model that will give the estimated CPU, memory and I/O used by the jobs. The analysis of the resource consumption highlighted that different classes of jobs have different kinds of resource needs, and the classification of the jobs made it possible to characterize several application patterns of the users. We also discovered that several users, whose resource usage on the cluster is considered too low, are responsible for a loss of CPU time on the order of five years over the considered three-month period. The predictors, trained with a supervised learning algorithm, were able to correctly classify a large set of data. We evaluated them with three performance indicators that gave an information retrieval rate of 71% to 89% and a probability of accurate prediction between 0.7 and 0.8. The results of this work will be particularly helpful for designing an optimal partitioning of the considered heterogeneous platform, taking into consideration the real application needs, thus leading to energy savings and performance improvements. Moreover, apart from the novelty of the contribution, the accurate classification scheme offers new insights into user behavior, of interest for the design of future HPC platforms.
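The prediction task has the shape of ordinary supervised classification: submission-time job attributes in, resource-consumption class out. A toy Python sketch follows; the features, class labels and data are invented for illustration and do not reflect the paper's actual model or dataset.

```python
# Toy version of the Evalix idea: predict a job's resource-consumption
# class from its submission-time attributes. All data is hypothetical.
from sklearn.ensemble import RandomForestClassifier

# (requested_cores, requested_walltime_h, array_size) per job.
X_train = [
    [1,  2,   1], [1,  4,   1], [16, 24, 1], [32, 48, 1],
    [1,  1, 100], [2,  1, 200], [64, 72, 1], [4,  8,  1],
]
# Observed consumption class: 0 = CPU-bound, 1 = memory-bound, 2 = IO-bound.
y_train = [0, 0, 1, 1, 2, 2, 1, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Predict the class of a newly submitted job before it runs, so the
# scheduler can place it on suitably provisioned resources.
new_job = [[8, 12, 1]]
print("predicted class:", clf.predict(new_job)[0])
```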

Full Text
Peer Reviewed
Distributed Cellular Evolutionary Algorithms in a Byzantine Environment
Muszynski, Jakub UL; Varrette, Sébastien UL; Dorronsorro, Bernabé et al

in Proc. of the 18th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2015), part of the 29th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2015) (2015, May)

Distributed parallel computing platforms contribute for a large part to some of the most powerful computers. Such architectures are typically based on accelerators (General Purpose computing on Graphics Processing Units, Many Integrated Cores, e.g. Xeon Phi co-processors) and/or a large number of interconnected computing nodes. Obviously, they raise new challenges, typically in terms of scalability, robustness, adaptability and security. At the advent of the quest for Ultrascale Computing Systems, this paper addresses the issue of fault tolerance toward Byzantine failures over such platforms. Indeed, the inherently unpredictable nature of these errors makes their detection, not to mention their correction, hard or even impossible to perform at large scale. At this level, Algorithm-Based Fault Tolerance (ABFT) techniques, where the fault tolerance scheme is tailored to the algorithm performed, seem the most promising approach to deal with such failures. In this context, Evolutionary Algorithms (EAs), especially panmictic global parallel EAs, exhibit a remarkable resilience against Byzantine failures modeled as cheating faults, as demonstrated either empirically or theoretically in previous studies [1], [2]. In this paper, we extend this analysis to the case of distributed EAs based on the cellular model, leading to distributed Cellular Evolutionary Algorithms (dCEAs). Our empirical study over a set of reference optimization problems confirms the ABFT nature of dCEAs. To our knowledge, this is the first study of dCEAs under the perspective of cheating issues and crash faults in a domain of distributed computations, thus opening new insights and perspectives for the design of competitive ultra-scale systems based on evolutionary programming models.
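To illustrate what the cellular model means in practice, here is a minimal synchronous cellular EA step on a toroidal grid, where each individual only interacts with its von Neumann neighborhood. The benchmark problem, operators and parameters are placeholder choices, not those of the paper's study.

```python
# Minimal cellular EA step: each cell recombines with its best neighbor.
# Illustration of the cellular model only.
import random

SIZE, GENES = 8, 16

def fitness(ind):                      # toy objective: OneMax
    return sum(ind)

grid = [[[random.randint(0, 1) for _ in range(GENES)]
         for _ in range(SIZE)] for _ in range(SIZE)]

def step(grid):
    new = [[None] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            # Von Neumann neighborhood on a torus.
            neigh = [grid[(i - 1) % SIZE][j], grid[(i + 1) % SIZE][j],
                     grid[i][(j - 1) % SIZE], grid[i][(j + 1) % SIZE]]
            mate = max(neigh, key=fitness)
            cut = random.randrange(1, GENES)            # one-point crossover
            child = grid[i][j][:cut] + mate[cut:]
            k = random.randrange(GENES)                 # bit-flip mutation
            child[k] ^= 1
            # Replace-if-better keeps each update local and elitist; this
            # locality is what bounds the spread of a cheating individual.
            new[i][j] = max(child, grid[i][j], key=fitness)
    return new

for _ in range(30):
    grid = step(grid)
print("best fitness:", max(fitness(ind) for row in grid for ind in row))
```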
