Results 61-80 of 162.
HPC Performance and Energy Efficiency: Overview and Trends
Varrette, Sébastien UL

Speeches/Talks (2015)

Introduction to Git and Vagrant
Varrette, Sébastien UL

Learning material (2015)

Peer Reviewed
Distributed Cellular Evolutionary Algorithms in a Byzantine Environment
Muszynski, Jakub UL; Varrette, Sébastien UL; Dorronsoro, Bernabé et al

in Proc. of the 18th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2015), part of the 29th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2015) (2015, May)

Distributed parallel computing platforms account for a large part of the most powerful computers. Such architectures are typically based on accelerators (General Purpose computing on Graphics Processing Units, Many Integrated Cores, e.g. Xeon Phi co-processors) and/or a large number of interconnected computing nodes. Obviously, they raise new challenges, typically in terms of scalability, robustness, adaptability and security. At the advent of the quest for Ultrascale Computing Systems, this paper addresses the issue of fault tolerance toward Byzantine failures over such platforms. Indeed, the inherently unpredictable nature of these errors renders their detection, not to mention their correction, hard or even impossible to perform at large scale. At this level, Algorithm-Based Fault Tolerance (ABFT) techniques, where the fault-tolerance scheme is tailored to the algorithm performed, seem the most promising approach to deal with such failures. In this context, Evolutionary Algorithms (EAs), especially panmictic global parallel EAs, exhibit a remarkable resilience against Byzantine failures modeled as cheating faults, as demonstrated either empirically or theoretically in previous studies [1], [2]. In this paper, we extend this analysis to the case of distributed EAs based on the cellular model, leading to distributed Cellular Evolutionary Algorithms (dCEAs). Our empirical study over a set of reference optimization problems confirms the ABFT nature of dCEAs. To our knowledge, this is the first study of dCEAs under the perspective of cheating issues and crash faults in a domain of distributed computations, thus opening new insights and perspectives for the design of competitive ultra-scale systems based on evolutionary programming models.
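
To make the model concrete for the reader: in a cellular EA, each individual occupies a cell of a structured population and selection happens only within a small local neighbourhood, which is what limits the damage a lying node can do. Below is a minimal Python sketch of this mechanic under assumed toy parameters (OneMax problem, ring topology, cheaters reporting inflated fitness); it illustrates the model only, not the dCEA implementation evaluated in the paper.

    # Minimal sketch of a cellular EA on a ring topology with "cheating"
    # nodes that report inflated fitness (a simple Byzantine fault model).
    # Toy parameters and problem; not the paper's dCEA implementation.
    import random

    GENOME_LEN, POP, CHEAT_RATE, GENS = 64, 100, 0.05, 200

    def fitness(ind):                    # OneMax: number of 1-bits
        return sum(ind)

    def reported_fitness(ind, is_cheater):
        # cheaters advertise the maximum fitness regardless of their genome
        return GENOME_LEN if is_cheater else fitness(ind)

    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    cheater = [random.random() < CHEAT_RATE for _ in range(POP)]

    for _ in range(GENS):
        new_pop = []
        for i in range(POP):
            hood = [(i - 1) % POP, i, (i + 1) % POP]   # ring neighbourhood
            # parents chosen by *reported* fitness, so cheaters get picked
            p1, p2 = sorted(hood, key=lambda j: -reported_fitness(pop[j], cheater[j]))[:2]
            cut = random.randrange(GENOME_LEN)          # one-point crossover
            child = pop[p1][:cut] + pop[p2][cut:]
            child[random.randrange(GENOME_LEN)] ^= 1    # single-bit mutation
            # replacement uses the locally *evaluated* (true) fitness
            new_pop.append(child if fitness(child) >= fitness(pop[i]) else pop[i])
        pop = new_pop

    print("best true fitness:", max(map(fitness, pop)), "/", GENOME_LEN)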

Peer Reviewed
Evalix: Classification and Prediction of Job Resource Consumption on HPC Platforms
Emeras, Joseph UL; Varrette, Sébastien UL; Guzek, Mateusz UL et al

in Proc. of the 19th Intl. Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP'15), part of the 29th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2015) (2015, May)

At the advent of a wished (or forced) convergence between High Performance Computing (HPC) platforms, stand-alone accelerators and virtualized resources from Cloud Computing (CC) systems, this article unveils the job prediction component of the Evalix project. This framework aims at an improved efficiency of the underlying Resource and Job Management System (RJMS) within heterogeneous HPC facilities through the automatic evaluation and characterization of the submitted workload. The objective is not only to better adapt the scheduled jobs to the available resource capabilities, but also to reduce the energy costs. For that purpose, we collected the resource consumption of all the jobs executed on a production cluster over a period of three months. Based on the analysis and then the classification of the jobs, we computed a resource consumption model. The objective is to train a set of predictors based on the aforementioned model that will give the estimated CPU, memory and IO used by the jobs. The analysis of the resource consumption highlighted that different classes of jobs have different kinds of resource needs, and the classification of the jobs enabled us to characterize several application patterns of the users. We also discovered that several users, whose resource usage on the cluster is considered too low, are responsible for a loss of CPU time on the order of five years over the considered three-month period. The predictors, trained with a supervised learning algorithm, were able to correctly classify a large set of data. We evaluated them with three performance indicators that gave an information retrieval rate of 71% to 89% and a probability of accurate prediction between 0.7 and 0.8. The results of this work will be particularly helpful for designing an optimal partitioning of the considered heterogeneous platform, taking into consideration the real application needs and thus leading to energy savings and performance improvements. Moreover, apart from the novelty of the contribution, the accurate classification scheme offers new insights into user behavior, of interest for the design of future HPC platforms.
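
The pipeline described above is, at its core, supervised classification of jobs from submission-time features. The sketch below shows the general shape of such a predictor using scikit-learn; the features, labels and data are synthetic placeholders, not the actual Evalix feature set or traces.

    # Sketch of the kind of supervised pipeline Evalix describes: train a
    # classifier on submission-time job features to predict a resource-usage
    # class. All data below is synthetic, so the scores are meaningless;
    # only the shape of the pipeline is illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(42)
    n = 5000
    # hypothetical submission-time features: requested cores, requested
    # walltime (hours), submission hour, user group id
    X = np.column_stack([
        rng.integers(1, 128, n),
        rng.uniform(0.1, 72.0, n),
        rng.integers(0, 24, n),
        rng.integers(0, 20, n),
    ])
    # synthetic labels: 0 = CPU-bound, 1 = memory-bound, 2 = IO-bound
    y = rng.integers(0, 3, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))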

Peer Reviewed
Energy efficiency in HPC Data Centers: Latest Advances to Build the Path to Exascale
Varrette, Sébastien UL; Bouvry, Pascal UL; Jarus, M. et al

in Handbook on Data Centers (2015)

Peer Reviewed
Energy Efficiency and High-Performance Computing
Bouvry, Pascal UL; Chetsa, G. L. T.; Da Costa, G. et al

in Pierson, J.-M. (Ed.) Large-scale Distributed Systems and Energy Efficiency: A Holistic View (2015)

Foundations of Coding: Compression, Encryption, Error-Correction
Dumas, J.-G.; Roch, J.-L.; Tannier, E. et al

Book published by Wiley & Sons (2015)

This book offers a comprehensive introduction to the fundamental structures and applications of a wide range of contemporary coding operations. The text focuses on the ways to structure information so that its transmission will be in the safest, quickest, most efficient and error-free manner possible. All coding operations are covered in a single framework, with initial chapters addressing early mathematical models and algorithmic developments which led to the structure of code. After discussing the general foundations of code, the chapters proceed to cover individual topics such as notions of compression, cryptography, detection, and correction codes. Both classical coding theories and the most cutting-edge models are addressed. The book explains how to structure coding information so that its transmission is safe, error-free, efficient, and fast; includes pseudo-code that readers may implement in their preferred programming language; and features descriptive diagrams, illustrations, and almost 150 exercises, with corrections, of varying complexity to enhance comprehension. Foundations of Coding: Compression, Encryption, Error-Correction is an invaluable resource for understanding the various ways information is structured for its secure and reliable transmission in the 21st-century world.
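
As a taste of the kind of coding operation the book covers, here is a short, self-contained Python sketch of a Hamming(7,4) encoder/decoder, which corrects any single flipped bit; it is an independent illustration, not code taken from the book.

    # Hamming(7,4): 4 data bits protected by 3 parity bits; any single
    # bit flip in the 7-bit codeword is located and corrected.
    # Independent illustration, not code reproduced from the book.

    def hamming74_encode(d):                 # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]              # covers positions 1,3,5,7
        p2 = d[0] ^ d[2] ^ d[3]              # covers positions 2,3,6,7
        p3 = d[1] ^ d[2] ^ d[3]              # covers positions 4,5,6,7
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    def hamming74_decode(c):                 # c = 7-bit codeword
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of the error
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1             # correct the flipped bit
        return [c[2], c[4], c[5], c[6]]

    word = [1, 0, 1, 1]
    code = hamming74_encode(word)
    code[4] ^= 1                             # inject a single-bit error
    assert hamming74_decode(code) == word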

Peer Reviewed
Performance Evaluation of the XDEM framework on the OpenStack Cloud Computing Middleware
Besseron, Xavier UL; Plugaru, Valentin UL; Mahmoudi, Amir Houshang UL et al

in Proceedings of the Fourth International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering (2015, February)

As Cloud Computing services become ever more prominent, it appears necessary to assess the efficiency of these solutions. This paper presents a performance evaluation of the OpenStack Cloud Computing middleware using our XDEM application, which simulates the pyrolysis of biomass, as a benchmark. We propose a systematic study based on a fully automated benchmarking framework to evaluate three configurations: native (i.e. no virtualization), and OpenStack with either the KVM or the Xen hypervisor. Our approach features the following advantages: a real user application, a fair comparison using the same hardware, and large-scale distributed execution, while being fully automated and reproducible. Experiments have been run on two different clusters, using up to 432 cores. Results show a moderate overhead for sequential execution and a significant penalty for distributed execution under the Cloud middleware. The overhead on multiple nodes is between 10% and 30% for OpenStack/KVM and between 30% and 60% for OpenStack/Xen.
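
For clarity on how such figures are read: the overhead is the relative slowdown of a virtualized run against the native baseline on identical hardware. A minimal Python sketch, with made-up timings rather than the paper's measurements:

    # Relative virtualization overhead against a native baseline.
    # Timings below are hypothetical placeholders, not measurements
    # from the paper.
    def overhead(t_native, t_virt):
        """Overhead in percent: 100 * (t_virt - t_native) / t_native."""
        return 100.0 * (t_virt - t_native) / t_native

    t_native = 1000.0                        # assumed native runtime (s)
    for name, t in [("OpenStack/KVM", 1250.0), ("OpenStack/Xen", 1500.0)]:
        print(f"{name}: {overhead(t_native, t):.0f}% overhead")  # 25%, 50%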

Peer Reviewed
Resilience within Ultrascale Computing System: Challenges and Opportunities from Nesus Project
Bouvry, Pascal UL; Mayer, R.; Muszynski, Jakub UL et al

in Supercomputing Frontiers and Innovations (2015), 2(2), 46-63

Ultrascale computing is a new computing paradigm that arises naturally from the need for computing systems able to handle massive data in possibly very large scale distributed systems, enabling new forms of applications that can serve a very large number of users in a more timely manner than we have ever experienced before. However, besides the benefits, ultrascale computing systems do not come without challenges. One of these challenges is resilience. Although resilience is already an established field in system science, and many methodologies and approaches are available to deal with it, the unprecedented scales of computing, the massive data to be managed, new network technologies, and drastically new forms of massive-scale applications bring new challenges that need to be addressed. This paper reviews the challenges and approaches of resilience in ultrascale computing systems from multiple perspectives, involving and addressing the resilience aspects of hardware-software co-design for ultrascale systems, resilience against (security) attacks, new approaches and methodologies to resilience in ultrascale systems, and applications and case studies.

Peer Reviewed
ParaMASK: a Multi-Agent System for the Efficient and Dynamic Adaptation of HPC Workloads
Guzek, Mateusz UL; Besseron, Xavier UL; Varrette, Sébastien UL et al

in 14th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2014) (2014, December)

Peer Reviewed
Performance Analysis of Cloud Environments on Top of Energy-Efficient Platforms Featuring Low Power Processors
Plugaru, Valentin UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 6th IEEE Intl. Conf. on Cloud Computing Technology and Science (CloudCom'14) (2014, December)

Energy efficiency remains a prevalent concern in the development of future HPC systems. Thus the next generations of supercomputers are foreseen to be developed as hybrid systems featuring traditional processors, accelerators (such as GPGPUs) and/or low-power processor architectures (ARM, Intel Atom, etc.) primarily designed for the mobile and embedded devices market. Also, a confluence with the Cloud Computing (CC) paradigm is anticipated, driven by economic sustainability factors. However, the performance impact of running Cloud middleware on such crossbred platforms remains to be explored, especially on low-power processors. In this context, this paper brings two main contributions: (1) the design and implementation of BACH, a framework able to execute automated performance evaluations of Cloud and HPC cluster environments; (2) the concrete validation of the framework for the evaluation of the modern OpenStack Infrastructure-as-a-Service (IaaS) middleware, deployed on a cutting-edge cluster based on ultra-low-power, energy-efficient ARM processors. Efficiency itself is measured with synthetic HPC benchmarks, HPCC (incorporating the well-known HPL) and HPCG, and with real-world applications from the bioinformatics domain: GROMACS and ABySS. The experimental evaluation revealed an average 24% drop in performance for compute-intensive tasks and a 65.6% drop in communication capacity compared to the native environment without the IaaS solution, showing a non-negligible impact on the tested platform. To our knowledge, this is one of the first studies of this type, since deployment attempts of the OpenStack infrastructure on top of ARM platforms are in early stages and are generally performed only for demonstration purposes.

Peer Reviewed
Exploiting the Hard-wired Vulnerabilities of Newscast via Connectivity-splitting Attack
Muszynski, Jakub UL; Varrette, Sébastien UL; Jimenez Laredo, Juan Luis UL et al

in Proc. of the IEEE Intl. Conf. on Network and System Security (NSS 2014) (2014, October)

Newscast is a model for information dissemination and membership management in large-scale, agent-based distributed systems. It deploys a simple, peer-to-peer data exchange protocol. The Newscast protocol forms an overlay network and keeps it connected by means of an epidemic algorithm, thus featuring a complex, spatially structured, and dynamically changing environment. It has recently become very popular due to its inherent resilience to node volatility, as it exhibits strong self-healing properties. In this paper, we analyze the robustness of the Newscast model when executed in a distributed environment subjected to malicious acts. More precisely, we evaluate the resilience of Newscast against cheating faults and demonstrate that even a few naive cheaters are able to defeat the protocol by breaking the network connectivity. Concrete experiments are performed using a framework that implements both the protocol and the cheating model considered in this work.
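
To visualize the protocol under discussion: each Newscast node maintains a fixed-size cache of (peer, timestamp) entries and periodically merges it with the cache of a randomly chosen known peer, both sides keeping the freshest entries; this epidemic exchange is what keeps the overlay connected. The following Python sketch models that cycle under assumed parameters; it is a simplified illustration, not the experimental framework used in the paper.

    # Newscast-style cache exchange: nodes gossip their fixed-size caches
    # of (peer_id, timestamp) entries and keep the freshest items.
    # Simplified model with assumed parameters.
    import random

    N, CACHE, CYCLES = 100, 8, 50
    random.seed(1)

    # bootstrap: every node starts knowing CACHE random peers at time 0
    cache = {i: {p: 0 for p in random.sample([q for q in range(N) if q != i], CACHE)}
             for i in range(N)}

    def exchange(a, b, now):
        # merge both views plus fresh self-advertisements, drop self-entries,
        # then keep the CACHE freshest items on each side
        merged = {**cache[a], **cache[b], a: now, b: now}
        for node in (a, b):
            view = {p: t for p, t in merged.items() if p != node}
            cache[node] = dict(sorted(view.items(), key=lambda kv: -kv[1])[:CACHE])

    clock = 0
    for _ in range(CYCLES):
        clock += 1
        for node in range(N):
            peer = random.choice(list(cache[node]))   # gossip with a known peer
            exchange(node, peer, clock)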

Peer Reviewed
Analysis of the Data Flow in the Newscast Protocol for Possible Vulnerabilities
Muszynski, Jakub UL; Varrette, Sébastien UL; Jiménez Laredo, J. L. et al

in Proc. of Intl. Conf. on Cryptography and Security System (CSS'14) (2014, September)

Peer Reviewed
HPC Performance and Energy-Efficiency of the OpenStack Cloud Middleware
Varrette, Sébastien UL; Plugaru, Valentin UL; Guzek, Mateusz UL et al

in Proc. of the 43rd Intl. Conf. on Parallel Processing (ICPP-2014), Heterogeneous and Unconventional Cluster Architectures and Applications Workshop (HUCAA'14) (2014, September)

Peer Reviewed
Management of an Academic HPC Cluster: The UL Experience
Varrette, Sébastien UL; Bouvry, Pascal UL; Cartiaux, Hyacinthe UL et al

in Proc. of the 2014 Intl. Conf. on High Performance Computing & Simulation (HPCS 2014) (2014, July)

The intensive growth of processing power, data storage and transmission capabilities has revolutionized many aspects of science. These resources are essential to achieve high-quality results in many application areas. In this context, the University of Luxembourg (UL) has operated since 2007 a High Performance Computing (HPC) facility and the related storage. The aspect of bridging computing and storage is a requirement of the UL service; the reasons are both legal (certain data may not move) and performance-related. Nowadays, people from the three faculties and/or the two interdisciplinary centers within the UL are users of this facility. More specifically, key research priorities such as Systems Biomedicine (by LCSB) and Security, Reliability & Trust (by SnT) require access to such HPC facilities in order to function in an adequate environment. The management of HPC solutions is a complex enterprise and a constant area for discussion and improvement. The UL HPC facility and the derived deployed services form a complex computing system to manage by its scale: at the moment of writing, it consists of 150 servers, 368 nodes (3880 computing cores) and 1996 TB of shared raw storage, which are all configured, monitored and operated by three persons using advanced IT automation solutions based on Puppet [1], FAI [2] and Capistrano [3]. This paper covers all the aspects related to the management of such a complex infrastructure, whether technical or administrative. Most design choices or implemented approaches have been motivated by several years of experience in addressing research needs, mainly in the HPC area but also in complementary services (typically Web-based). In this context, we tried to address many technological issues in a flexible and convenient way. This experience report may be of interest to other research centers, belonging either to the public or the private sector, looking for good if not best practices in cluster architecture and management.

HPC platforms @ UL: Overview (as of 2014) and Usage
Varrette, Sébastien UL; Bouvry, Pascal UL; Georgatos, Fotis et al

Presentation (2014, May)

Peer Reviewed
Comparison of Multi-objective Optimization Algorithms for the JShadObf JavaScript Obfuscator
Bertholon, Benoit UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 17th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2014), part of the 28th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2014) (2014, May)

With the advent of the Cloud Computing (CC) paradigm and the explosion of new Web Services proposed over the Internet (such as Google Office Apps, Dropbox or Doodle), the protection of the programs at the heart of these services becomes more and more crucial, especially for the companies making business on top of these services. The majority of these services now use the JavaScript programming language to interact with the user, as all modern web browsers, whether on desktops, game consoles, tablets or smart phones, include JavaScript interpreters, making it the most ubiquitous programming language in history. This context renews the interest in obfuscation techniques, i.e. rendering a program "unintelligible" without altering its functionality. The objective is to prevent reverse-engineering of the program for a certain period of time; an absolute protection by this means is unrealistic, since stand-alone obfuscation for arbitrary programs was proven impossible in 2001. In [11], we presented JSHADOBF, an obfuscation framework based on evolutionary heuristics designed to optimize, for a given input JavaScript program, the sequence of transformations that should be applied to the source code to improve its obfuscation capacity. Measuring this capacity is based on the combination of several metrics optimized simultaneously with Multi-Objective Evolutionary Algorithms (MOEAs). In this paper, we extend and complete the experiments made around JSHADOBF to analyze the impact of the underlying MOEA on the obfuscation process. In particular, we compare the performance of NSGA-II and MOEA/D (two reference algorithms in the optimization domain) on top of JSHADOBF, first to obfuscate a pedagogical program inherited from linear algebra, and then one of the most popular and widely used JavaScript libraries: jQuery.
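
The selection step that drives such a framework is multi-objective: candidate transformation sequences are scored on several obfuscation metrics at once, and the Pareto-optimal set (no candidate beaten on every objective) is carried forward. A minimal Python sketch with made-up candidates and metrics, not JShadObf's actual objectives or either MOEA's full machinery:

    # Pareto dominance and non-dominated front extraction, the core
    # selection idea behind NSGA-II and MOEA/D style algorithms.
    # Candidates and objective scores are made-up placeholders.
    import random

    def dominates(a, b):
        # a dominates b if it is no worse on every objective and strictly
        # better on at least one (all objectives maximized here)
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(scored):
        return [(c, s) for c, s in scored
                if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]

    random.seed(3)
    # hypothetical candidates: (transformation sequence id, objective vector)
    # objectives could be e.g. identifier entropy, AST growth, string opacity
    scored = [(f"seq-{i}", tuple(round(random.random(), 2) for _ in range(3)))
              for i in range(10)]
    for cand, score in pareto_front(scored):
        print(cand, score)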

Théorie des Codes : Compression, Cryptage et Correction (in English: Theory of Codes: Compression, Encryption and Correction)
Dumas, J.-G.; Roch, J.-L.; Tannier, E. et al

Book published by Dunod, 2nd edition (2014)
