References of "Varrette, Sébastien 50003258"
     in
Bookmark and Share    
Comparing Broad-Phase Interaction Detection Algorithms for Multiphysics DEM Applications
Rousset, Alban UL; Mainassara Chekaraou, Abdoul Wahid UL; Liao, Yu-Chung UL et al

Scientific Conference (2017, September)

Collision detection is an ongoing source of research and optimization in many fields, including video games and numerical simulations [6, 7, 8]. The goal of collision detection is to report a geometric contact when it is about to occur or has actually occurred. Unfortunately, detailed and exact collision detection for large numbers of objects represents an immense amount of computation, naively O(n²) operations with n being the number of objects [9]. To avoid these expensive computations, collision detection is decomposed into two phases, as shown in Figure 1: the Broad-Phase and the Narrow-Phase. In this paper, we focus on Broad-Phase algorithms in a large, dynamic, three-dimensional environment. We studied two kinds of Broad-Phase algorithms: spatial partitioning and spatial sorting. Spatial partitioning techniques operate by dividing space into a number of regions that can be quickly tested against each object. Two types of spatial partitioning are considered: grids and trees. Grid-based algorithms divide space into regions and test whether objects overlap the same region, which reduces the number of pairs to test. Tree-based algorithms use a tree structure where each node spans a particular area of space; this reduces the pairwise checking cost because only tree leaves are checked. The spatial-sorting-based algorithm maintains a sorted spatial ordering of objects: Axis-Aligned Bounding Boxes (AABBs) are projected onto the x, y and z axes and put into sorted lists. Since two objects collide if and only if their projections overlap on all three axes, this axis sorting reduces the number of pairwise tests by restricting them to pairs that overlap on at least one axis. For this study, ten different Broad-Phase collision detection algorithms or frameworks have been considered. The Bullet [6] and CGAL [10, 11] frameworks have been used; most of the implemented algorithms come from published papers or reference implementations.
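
To make the spatial-sorting approach concrete, here is a minimal sweep-and-prune sketch in Python. It is illustrative only: the box layout and function names are invented for the example and do not come from the paper's implementation.

# Minimal sweep-and-prune sketch: sort AABBs along one axis, then confirm
# overlap on the remaining axes before reporting a candidate pair.
def overlap_1d(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def broad_phase(boxes):
    # boxes: dict id -> ((xmin, xmax), (ymin, ymax), (zmin, zmax))
    order = sorted(boxes, key=lambda i: boxes[i][0][0])  # sort ids by xmin
    active, candidates = [], []
    for i in order:
        xmin = boxes[i][0][0]
        # drop objects whose x-interval ended before the sweep point
        active = [j for j in active if boxes[j][0][1] >= xmin]
        for j in active:
            # x-intervals overlap by construction; AABBs collide iff they
            # also overlap on y and z
            if overlap_1d(boxes[i][1], boxes[j][1]) and overlap_1d(boxes[i][2], boxes[j][2]):
                candidates.append((j, i))
        active.append(i)
    return candidates  # candidate pairs handed over to the Narrow-Phase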

Amazon Elastic Compute Cloud (EC2) vs. in-House HPC Platform: a Cost Analysis
Emeras, Joseph UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 9th IEEE Intl. Conf. on Cloud Computing (CLOUD 2016) (2016, June)

Since its advent in the mid-2000s, the Cloud Computing (CC) paradigm has been increasingly advertised as THE solution to most IT problems. While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, several voices (most probably commercial ones) express the wish that CC platforms could also serve HPC needs and eventually replace in-house HPC platforms. If we exclude the pure performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when submitted to a High Performance Computing (HPC) workload, the question of real cost-effectiveness is often left aside, with the intuition that, most probably, the instances offered by Cloud providers are competitive from a cost point of view. In this article, we set out to confirm (or refute) this intuition by evaluating the Total Cost of Ownership (TCO) of the in-house HPC facility we have operated since 2007 within the University of Luxembourg (UL), and compare it with the investment that would have been required to run the same platform (and the same workload) on a competitive Cloud IaaS offer. Our approach to this price comparison is two-fold. First, we propose a theoretical price-performance model based on the study of the actual Cloud instances proposed by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on our own cluster TCO and taking into account all the Operating Expense (OPEX), we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. The results obtained generally advocate for the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing (CC) platforms, even when provided by the leading Cloud provider worldwide.
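
A minimal sketch of the hourly-comparison logic, assuming a simple amortization model; every figure and parameter name below is a hypothetical placeholder, not a value or the model from the paper.

# Amortize acquisition cost (CAPEX) over the platform lifetime, add the
# Operating Expense (OPEX), and normalize per node-hour actually used.
def inhouse_cost_per_node_hour(capex, annual_opex, years, nodes, utilization):
    total_cost = capex + annual_opex * years
    used_node_hours = nodes * years * 365 * 24 * utilization
    return total_cost / used_node_hours

# Hypothetical example: 2 MEUR CAPEX, 400 kEUR/year OPEX, 5 years, 150 nodes
inhouse = inhouse_cost_per_node_hour(2_000_000, 400_000, 5, 150, 0.8)
ec2_on_demand = 1.68  # placeholder hourly price of a comparable EC2 instance
print(f"in-house: {inhouse:.2f} EUR/node-hour vs EC2: {ec2_on_demand:.2f}")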

Reducing Efficiency of Connectivity-Splitting Attack on Newscast via Limited Gossip
Muszynski, Jakub UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 19th European Event on Bio-Inspired Computation, EvoCOMNET 2016 (2016, March)

Newscast is a Peer-to-Peer, nature-inspired, gossip-based data exchange protocol used for information dissemination and membership management in large-scale, agent-based distributed systems. The model follows a probabilistic scheme able to keep a self-organised, small-world equilibrium featuring a complex, spatially structured and dynamically changing environment. Newscast has gained popularity since the early 2000s thanks to its inherent resilience to node volatility, as the protocol exhibits strong self-healing properties. However, the original design proved to be surprisingly fragile in a Byzantine environment subjected to cheating faults. Indeed, a set of recent studies emphasized the hard-wired vulnerabilities of the protocol, leading to an efficient implementation of a malicious client, where a few naive cheaters are able to break the network connectivity in a very short time. Extending these previous works, we propose in this paper a modification of the seminal protocol with embedded counter-measures, improving the resilience of the scheme against malicious acts without significantly affecting the original Newscast's properties or its inherent performance. Concrete experiments were performed to support these claims, using a framework implementing all the solutions discussed in this work.
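
For background, the heart of Newscast can be sketched as follows: each peer keeps a small cache of (peer id, timestamp) descriptors, and a gossip round merges two caches so that both sides keep only the freshest entries. This is a simplified rendering under assumed names and cache size, not the paper's code; the counter-measures proposed here constrain how such exchanges are initiated and merged.

# Simplified Newscast gossip round over caches of (peer id -> timestamp).
import random, time

C = 20  # cache size (illustrative value)

def merge(cache_a, cache_b, initiator_id):
    entries = {**cache_a, **cache_b, initiator_id: time.time()}  # fresh self-descriptor
    freshest = sorted(entries.items(), key=lambda kv: kv[1], reverse=True)[:C]
    return dict(freshest)

def gossip_round(caches, node):
    peer = random.choice([p for p in caches[node] if p != node])  # random known peer
    merged = merge(caches[node], caches[peer], node)
    caches[node] = dict(merged)  # both sides adopt the merged, truncated view
    caches[peer] = dict(merged)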

HPC or the Cloud: a cost study over an XDEM Simulation
Emeras, Joseph; Besseron, Xavier UL; Varrette, Sébastien UL et al

in Proc. of the 7th International Supercomputing Conference in Mexico (ISUM 2016) (2016)

An LLVM-based Approach to Generate Energy Aware Code by means of MOEAs
Varrette, Sébastien UL; Dorronsoro, Bernabé; Bouvry, Pascal UL

in Proc. of the 7th European Symposium on Computational Intelligence and Mathematics (ESCIM 2015) (2015, October)

Moderating energy consumption and building eco-friendly computing infrastructures are major concerns in the implementation of High Performance Computing (HPC) systems, especially as a worldwide effort targets the production of an Exaflop machine by 2020 within a power envelope of 20 MW. Energy savings can be pursued at various levels, and in this paper we investigate the automatic generation of energy-aware software, with the ambition to keep the same level of efficiency, testability, scalability and security. To this end, the Evo-LLVM framework is proposed. Based on the modular LLVM Compiler Infrastructure and exploiting various evolutionary heuristics, our scheme is designed to optimize, for a given input source code (written in C), the sequence of LLVM transformations that should be applied to improve its energy efficiency without degrading its other performance attributes (execution time, parallel or distributed scalability). Measuring this capacity is based on the combination of several metrics optimized simultaneously with Multi-Objective Evolutionary Algorithms (MOEAs). In this position paper, the NSGA-II algorithm is implemented within Evo-LLVM, while the analysis of more advanced heuristics is in progress. In all cases, the experimental validation of the framework over a pedagogical code sample reveals a drastic reduction of the energy consumed during execution while maintaining (or even improving) the average execution time.
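
The core idea can be sketched as a genome of LLVM passes evaluated on two objectives; the pass names, tool invocations and energy probe below are assumptions made for illustration, not Evo-LLVM's actual interface.

# An individual is a sequence of LLVM transformation passes; its fitness
# couples runtime with an energy reading (two objectives to minimize).
import random, subprocess, time

PASSES = ["-mem2reg", "-inline", "-loop-unroll", "-gvn", "-sccp"]  # legacy opt flags

def read_energy_counter():
    return 0.0  # placeholder: a real setup would read e.g. RAPL counters

def evaluate(genome, bitcode="input.bc"):
    subprocess.run(["opt", *genome, bitcode, "-o", "opt.bc"], check=True)
    subprocess.run(["clang", "opt.bc", "-o", "prog"], check=True)
    e0, t0 = read_energy_counter(), time.time()
    subprocess.run(["./prog"], check=True)
    return time.time() - t0, read_energy_counter() - e0  # (runtime, energy)

def random_genome(length=4):
    return [random.choice(PASSES) for _ in range(length)]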

Distributed Cellular Evolutionary Algorithms in a Byzantine Environment
Muszynski, Jakub UL; Varrette, Sébastien UL; Dorronsoro, Bernabé et al

in Proc. of the 18th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2015), part of the 29th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2015) (2015, May)

Distributed parallel computing platforms contribute for a large part to some of the most powerful computers. Such architectures are typically based on accelerators (General Purpose computing on Graphics Processing Units, Many Integrated Cores, e.g. Xeon Phi co-processors) and/or a large number of interconnected computing nodes. Obviously, they raise new challenges, typically in terms of scalability, robustness, adaptability and security. At the advent of the quest for Ultrascale Computing Systems, this paper addresses the issue of fault tolerance towards Byzantine failures over such platforms. Indeed, the inherently unpredictable nature of these errors renders their detection, not to mention their correction, hard or even impossible to perform at large scale. At this level, Algorithm-Based Fault Tolerance (ABFT) techniques, where the fault tolerance scheme is tailored to the algorithm performed, seem the most promising approach to deal with such failures. In this context, Evolutionary Algorithms (EAs), especially panmictic global parallel EAs, exhibit a remarkable resilience against Byzantine failures modeled as cheating faults, as demonstrated either empirically or theoretically in previous studies [1], [2]. In this paper, we extend this analysis to the case of distributed EAs based on the cellular model, leading to distributed Cellular Evolutionary Algorithms (dCEAs). Our empirical study over a set of reference optimization problems confirms the ABFT nature of dCEAs. To our knowledge, this is the first study of dCEAs under the perspective of cheating issues and crash faults in a domain of distributed computations, thus opening new insights and perspectives for the design of competitive ultrascale systems based on evolutionary programming models.
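
As background, what makes the cellular model attractive for distribution is that every interaction is confined to a small grid neighborhood. A minimal synchronous step, sketched with invented operator names (not the experimental code of the paper):

# One step of a cellular EA on a toroidal grid: each cell recombines with
# its best von Neumann neighbor and keeps the child if it is no worse.
def neighbors(x, y, w, h):
    return [((x - 1) % w, y), ((x + 1) % w, y), (x, (y - 1) % h), (x, (y + 1) % h)]

def cea_step(grid, fitness, crossover, mutate):
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            bx, by = max(neighbors(x, y, w, h), key=lambda p: fitness(grid[p[1]][p[0]]))
            child = mutate(crossover(grid[y][x], grid[by][bx]))
            if fitness(child) >= fitness(grid[y][x]):  # local replacement rule
                new[y][x] = child
    return new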

Evalix: Classification and Prediction of Job Resource Consumption on HPC Platforms
Emeras, Joseph UL; Varrette, Sébastien UL; Guzek, Mateusz UL et al

in Proc. of the 19th Intl. Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP'15), part of the 29th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2015) (2015, May)

At the advent of a wished (or forced) convergence between High Performance Computing (HPC) platforms, stand-alone accelerators and virtualized resources from Cloud Computing (CC) systems, this article unveils the job prediction component of the Evalix project. This framework aims at improving the efficiency of the underlying Resource and Job Management System (RJMS) within heterogeneous HPC facilities by automatic evaluation and characterization of the submitted workload. The objective is not only to better adapt the scheduled jobs to the available resource capabilities, but also to reduce energy costs. For that purpose, we collected the resource consumption of all the jobs executed on a production cluster over a period of three months. Based on the analysis and classification of the jobs, we computed a resource consumption model. The objective is to train a set of predictors based on the aforementioned model that will give the estimated CPU, memory and IO used by the jobs. The analysis of the resource consumption highlighted that different classes of jobs have different kinds of resource needs, and the classification of the jobs enabled us to characterize several application patterns of the users. We also discovered that several users, whose resource usage on the cluster is considered too low, are responsible for a loss of CPU time on the order of five years over the considered three-month period. The predictors, trained with a supervised learning algorithm, were able to correctly classify a large set of data. We evaluated them with three performance indicators that gave an information retrieval rate of 71% to 89% and a probability of accurate prediction between 0.7 and 0.8. The results of this work will be particularly helpful for designing an optimal partitioning of the considered heterogeneous platform, taking into consideration the real application needs and thus leading to energy savings and performance improvements. Moreover, apart from the novelty of the contribution, the accurate classification scheme offers new insights into user behavior, of interest for the design of future HPC platforms.
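
The prediction step reads as a standard supervised-learning pipeline; the sketch below uses scikit-learn with an illustrative feature set and class labels, which are guesses rather than the exact Evalix model.

# Train a classifier mapping job-submission features to an observed
# resource-consumption class (features/labels are illustrative).
from sklearn.ensemble import RandomForestClassifier

def train_predictor(X, y):
    # X: one row per past job, e.g. [requested_cores, requested_walltime, queue_id]
    # y: observed class, e.g. 0 = CPU-bound, 1 = memory-bound, 2 = IO-bound
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def predict_class(model, job_features):
    # queried by the RJMS at submission time to steer the job towards a
    # partition matching its likely CPU/memory/IO profile
    return model.predict([job_features])[0]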

Energy Efficiency and High-Performance Computing
Bouvry, Pascal UL; Chetsa, G. L. T.; Da Costa, G. et al

in Pierson, J.-M. (Ed.) Large-scale Distributed Systems and Energy Efficiency: A Holistic View (2015)

Performance Evaluation of the XDEM framework on the OpenStack Cloud Computing Middleware
Besseron, Xavier UL; Plugaru, Valentin UL; Mahmoudi, Amir Houshang UL et al

in Proceedings of the Fourth International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering (2015, February)

As Cloud Computing services become ever more prominent, it appears necessary to assess the efficiency of these solutions. This paper presents a performance evaluation of the OpenStack Cloud Computing middleware using our XDEM application, simulating the pyrolysis of biomass, as a benchmark. We propose a systematic study based on a fully automated benchmarking framework to evaluate three different configurations: Native (i.e. no virtualization), and OpenStack with either the KVM or the XEN hypervisor. Our approach features the following advantages: a real user application, a fair comparison using the same hardware, and large-scale distributed execution, while being fully automated and reproducible. Experiments have been run on two different clusters, using up to 432 cores. Results show a moderate overhead for sequential execution and a significant penalty for distributed execution under the Cloud middleware. The overhead on multiple nodes is between 10% and 30% for OpenStack/KVM and between 30% and 60% for OpenStack/XEN.
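
For reference, the quoted overheads follow the usual relative-slowdown definition; the timing values in this sketch are made up.

# Relative virtualization overhead with respect to the native run.
def overhead_pct(t_virtualized, t_native):
    return (t_virtualized - t_native) / t_native * 100.0

print(overhead_pct(78.0, 60.0))  # hypothetical times -> 30.0, i.e. a 30% penalty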

Energy efficiency in HPC Data Centers: Latest Advances to Build the Path to Exascale
Varrette, Sébastien UL; Bouvry, Pascal UL; Jarus, M. et al

in Handbook on Data Centers (2015)

Foundations of Coding: Compression, Encryption, Error-Correction
Dumas, J.-G.; Roch, J.-L.; Tannier, E. et al

Book published by Wiley & Sons (2015)

This book offers a comprehensive introduction to the fundamental structures and applications of a wide range of contemporary coding operations. This text focuses on the ways to structure information so that its transmission will be in the safest, quickest, most efficient and error-free manner possible. All coding operations are covered in a single framework, with initial chapters addressing early mathematical models and algorithmic developments which led to the structure of code. After discussing the general foundations of code, chapters proceed to cover individual topics such as notions of compression, cryptography, detection, and correction codes. Both classical coding theories and the most cutting-edge models are addressed, along with helpful exercises of varying complexity to enhance comprehension. The book:
- Explains how to structure coding information so that its transmission is safe, error-free, efficient, and fast
- Includes pseudo-code that readers may implement in their preferred programming language
- Features descriptive diagrams and illustrations, and almost 150 exercises, with corrections, of varying complexity to enhance comprehension
Foundations of Coding: Compression, Encryption, Error-Correction is an invaluable resource for understanding the various ways information is structured for its secure and reliable transmission in the 21st-century world.

Resilience within Ultrascale Computing System: Challenges and Opportunities from Nesus Project
Bouvry, Pascal UL; Mayer, R.; Muszynski, Jakub UL et al

in Supercomputing Frontiers and Innovations (2015), 2(2), 46--63

Ultrascale computing is a new computing paradigm that comes naturally from the necessity of computing systems able to handle massive data in possibly very large scale distributed systems, enabling new forms of applications that can serve a very large number of users in a timely manner never experienced before. However, besides the benefits, ultrascale computing systems do not come without challenges. One of these challenges is resilience. Although resilience is already an established field in system science, and many methodologies and approaches are available to deal with it, the unprecedented scales of computing, the massive data to be managed, new network technologies, and drastically new forms of massive-scale applications bring new challenges that need to be addressed. This paper reviews the challenges and approaches of resilience in ultrascale computing systems from multiple perspectives, involving and addressing the resilience aspects of hardware-software co-design for ultrascale systems, resilience against (security) attacks, new approaches and methodologies to resilience in ultrascale systems, and applications and case studies.

Performance Analysis of Cloud Environments on Top of Energy-Efficient Platforms Featuring Low Power Processors
Plugaru, Valentin UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 6th IEEE Intl. Conf. on Cloud Computing Technology and Science (CloudCom'14) (2014, December)

Energy efficiency remains a prevalent concern in the development of future HPC systems. The next generations of supercomputers are thus foreseen to be developed as hybrid systems featuring traditional processors, accelerators (such as GPGPUs) and/or low-power processor architectures (ARM, Intel Atom, etc.) primarily designed for the mobile and embedded devices market. Also, a confluence with the Cloud Computing (CC) paradigm is anticipated, driven by economic sustainability factors. However, the performance impact of running Cloud middleware on such crossbred platforms remains to be explored, especially on low-power processors. In this context, this paper brings two main contributions: (1) the design and implementation of BACH, a framework able to execute automated performance evaluations of Cloud and HPC cluster environments; (2) the concrete validation of the framework through the evaluation of the modern OpenStack Infrastructure-as-a-Service (IaaS) middleware, deployed on a cutting-edge cluster based on ultra-low-power, energy-efficient ARM processors. The efficiency itself is measured with synthetic HPC benchmarks, HPCC (incorporating the well-known HPL) and HPCG, and with real-world applications from the bioinformatics domain (GROMACS and ABySS). The experimental evaluation revealed an average 24% drop in performance for compute-intensive tasks and a 65.6% drop in communication capacity compared to the native environment without the IaaS solution, showing a non-negligible impact on the tested platform. To our knowledge, this is one of the first studies of this type, since deployment attempts of the OpenStack infrastructure on top of ARM platforms are in early stages and are generally performed only for demonstration purposes.

ParaMASK: a Multi-Agent System for the Efficient and Dynamic Adaptation of HPC Workloads
Guzek, Mateusz UL; Besseron, Xavier UL; Varrette, Sébastien UL et al

in 14th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2014) (2014, December)

Exploiting the Hard-wired Vulnerabilities of Newscast via Connectivity-splitting Attack
Muszynski, Jakub UL; Varrette, Sébastien UL; Jimenez Laredo, Juan Luis UL et al

in Proc. of the IEEE Intl. Conf. on Network and System Security (NSS 2014) (2014, October)

Newscast is a model for information dissemination and membership management in large-scale, agent-based distributed systems. It deploys a simple, peer-to-peer data exchange protocol. The Newscast protocol forms an overlay network and keeps it connected by means of an epidemic algorithm, thus featuring a complex, spatially structured, and dynamically changing environment. It has recently become very popular due to its inherent resilience to node volatility, as it exhibits strong self-healing properties. In this paper, we analyze the robustness of the Newscast model when executed in a distributed environment subjected to malicious acts. More precisely, we evaluate the resilience of Newscast against cheating faults and demonstrate that even a few naive cheaters are able to defeat the protocol by breaking the network connectivity. Concrete experiments are performed using a framework that implements both the protocol and the cheating model considered in this work.
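
One way to picture the cheating fault: instead of merging honestly (cf. the gossip sketch earlier in this list), a cheater answers every exchange with a degenerate cache advertising only itself with a maximally fresh timestamp, so honest peers progressively evict legitimate descriptors until the overlay splits. This is an illustrative reading of such an attack, not the paper's exact malicious client.

# Illustrative naive cheater reply: a cache containing only the cheater's
# own descriptor, stamped so that it always ranks as the freshest entry.
import time

def cheater_reply(self_id):
    return {self_id: time.time() + 3600}  # future timestamp: always "freshest"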

Analysis of the Data Flow in the Newscast Protocol for Possible Vulnerabilities
Muszynski, Jakub UL; Varrette, Sébastien UL; Jiménez Laredo, J. L. et al

in Proc. of Intl. Conf. on Cryptography and Security System (CSS'14) (2014, September)

HPC Performance and Energy-Efficiency of the OpenStack Cloud Middleware
Varrette, Sébastien UL; Plugaru, Valentin UL; Guzek, Mateusz UL et al

in Proc. of the 43rd Intl. Conf. on Parallel Processing (ICPP-2014), Heterogeneous and Unconventional Cluster Architectures and Applications Workshop (HUCAA'14) (2014, September)

Management of an Academic HPC Cluster: The UL Experience
Varrette, Sébastien UL; Bouvry, Pascal UL; Cartiaux, Hyacinthe UL et al

in Proc. of the 2014 Intl. Conf. on High Performance Computing Simulation (HPCS 2014) (2014, July)

The intensive growth of processing power, data storage and transmission capabilities has revolutionized many aspects of science. These resources are essential to achieve high-quality results in many application areas. In this context, the University of Luxembourg (UL) has operated since 2007 a High Performance Computing (HPC) facility and the related storage. The aspect of bridging computing and storage is a requirement of the UL service; the reasons are both legal (certain data may not move) and performance-related. Nowadays, people from the three faculties and/or the two Interdisciplinary centers within the UL are users of this facility. More specifically, key research priorities such as Systems Biomedicine (by LCSB) and Security, Reliability & Trust (by SnT) require access to such HPC facilities in order to function in an adequate environment. The management of HPC solutions is a complex enterprise and a constant area for discussion and improvement. The UL HPC facility and the derived deployed services constitute a complex computing system to manage by its scale: at the moment of writing, it consists of 150 servers, 368 nodes (3880 computing cores) and 1996 TB of shared raw storage, which are all configured, monitored and operated by three persons using advanced IT automation solutions based on Puppet [1], FAI [2] and Capistrano [3]. This paper covers all the aspects related to the management of such a complex infrastructure, whether technical or administrative. Most design choices and implemented approaches have been motivated by several years of experience in addressing research needs, mainly in the HPC area but also in complementary services (typically Web-based). In this context, we tried to answer many technological issues in a flexible and convenient way. This experience report may be of interest to other research centers, belonging either to the public or the private sector, looking for good if not best practices in cluster architecture and management.

Comparison of Multi-objective Optimization Algorithms for the JShadObf JavaScript Obfuscator
Bertholon, Benoit UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 17th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2014), part of the 28th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2014) (2014, May)

With the advent of the Cloud Computing (CC) paradigm and the explosion of new Web Services proposed over the Internet (such as Google Office Apps, Dropbox or Doodle), the protection of the programs at the heart of these services becomes more and more crucial, especially for the companies making business on top of these services. The majority of these services now use the JavaScript programming language to interact with the user, as all modern web browsers, whether on desktops, game consoles, tablets or smart phones, include JavaScript interpreters, making it the most ubiquitous programming language in history. This context renews the interest in obfuscation techniques, i.e. rendering a program "unintelligible" without altering its functionality. The objective is to prevent reverse-engineering of the program for a certain period of time; absolute protection by this means is unrealistic, since stand-alone obfuscation for arbitrary programs was proven impossible in 2001. In [11], we presented JSHADOBF, an obfuscation framework based on evolutionary heuristics designed to optimize, for a given input JavaScript program, the sequence of transformations that should be applied to the source code to improve its obfuscation capacity. Measuring this capacity is based on the combination of several metrics optimized simultaneously with Multi-Objective Evolutionary Algorithms (MOEAs). In this paper, we extend and complete the experiments made around JSHADOBF to analyze the impact of the underlying MOEA on the obfuscation process. In particular, we compare the performances of NSGA-II and MOEA/D (two reference algorithms in the optimization domain) on top of JSHADOBF, first to obfuscate a pedagogical program inherited from linear algebra, and then one of the most popular and widely used JavaScript libraries: jQuery.
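
The multi-objective formulation can be sketched as follows; the transformation names and metrics are illustrative stand-ins, not JSHADOBF's actual operator set or metric suite.

# An individual is a sequence of source-to-source transforms; its fitness
# vector combines several obfuscation metrics for the MOEA to optimize.
import difflib

TRANSFORMS = ["rename_identifiers", "flatten_control_flow",
              "encode_strings", "insert_dead_code"]  # illustrative names

def similarity(a, b):
    return difflib.SequenceMatcher(None, a, b).ratio()

def cyclomatic_proxy(src):
    return src.count("if") + src.count("for") + src.count("while") + 1  # crude stand-in

def fitness(original, transformed):
    obscurity = 1.0 - similarity(original, transformed)  # maximize
    complexity = cyclomatic_proxy(transformed)           # maximize
    size_ratio = len(transformed) / len(original)        # minimize
    return obscurity, complexity, size_ratio  # handed to NSGA-II / MOEA/D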
