References of "Bouvry, Pascal 50001021"
Comparison of Multi-objective Optimization Algorithms for the JShadObf JavaScript Obfuscator
Bertholon, Benoit UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the 17th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2014), part of the 28th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2014) (2014, May)

With the advent of the Cloud Computing (CC) paradigm and the explosion of new Web Services offered over the Internet (such as Google Office Apps, Dropbox or Doodle), the protection of the programs at the heart of these services becomes more and more crucial, especially for the companies making business on top of these services. The majority of these services now use the JavaScript programming language to interact with the user, as all modern web browsers – whether on desktops, game consoles, tablets or smart phones – include JavaScript interpreters, making it the most ubiquitous programming language in history. This context renews the interest in obfuscation techniques, i.e. rendering a program "unintelligible" without altering its functionality. The objective is to prevent reverse-engineering of the program for a certain period of time – an absolute protection being unrealistic, since stand-alone obfuscation for arbitrary programs was proven impossible in 2001. In [11], we presented JSHADOBF, an obfuscation framework based on evolutionary heuristics designed to optimize, for a given input JavaScript program, the sequence of transformations that should be applied to the source code to improve its obfuscation capacity. Measuring this capacity is based on the combination of several metrics optimized simultaneously with Multi-Objective Evolutionary Algorithms (MOEAs). In this paper, we extend and complete the experiments made around JSHADOBF to analyze the impact of the underlying MOEA on the obfuscation process. In particular, we compare the performances of NSGA-II and MOEA/D (two reference algorithms in the optimization domain) on top of JSHADOBF, first to obfuscate a pedagogical program inherited from linear algebra, then one of the most popular and widely used JavaScript libraries: jQuery.
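
As an illustration of the search that the abstract describes, the sketch below maintains a Pareto archive of candidate transformation sequences evaluated on two objectives. The transformation names and the scoring function are hypothetical stand-ins, and the random sampling loop is not NSGA-II, MOEA/D, nor the actual JSHADOBF code; it only shows how Pareto dominance drives the selection of transformation sequences.

    # Toy multi-objective search over obfuscation transformation sequences (Python).
    # Hypothetical transformation names and stand-in metrics; NOT the JSHADOBF framework.
    import random

    TRANSFORMS = ["rename_vars", "insert_dead_code", "split_strings", "flatten_cfg"]

    def evaluate(seq):
        """Return two objectives to maximise: (obfuscation score, -size overhead).
        Real metrics would be computed on the transformed JavaScript source."""
        score = len(set(seq)) + 0.1 * len(seq)
        overhead = 0.5 * len(seq)
        return (score, -overhead)

    def dominates(a, b):
        """Pareto dominance for maximisation: a is at least as good everywhere, better somewhere."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_search(iterations=200, max_len=6, seed=42):
        rng = random.Random(seed)
        archive = []  # non-dominated (sequence, objectives) pairs found so far
        for _ in range(iterations):
            seq = [rng.choice(TRANSFORMS) for _ in range(rng.randint(1, max_len))]
            objs = evaluate(seq)
            if any(dominates(o, objs) for _, o in archive):
                continue                                   # dominated by an archived solution
            archive = [(s, o) for s, o in archive if not dominates(objs, o)]
            archive.append((seq, objs))
        return archive

    if __name__ == "__main__":
        for seq, objs in pareto_search():
            print(objs, seq)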

Emerging Paradigms and Areas for Expansion
Bouvry, Pascal UL

in IEEE Cloud Computing (2014), 1(1),

IEEE Cloud Computing encourages the academic and research communities to provide exciting articles on emerging cloud and adjacent technology trends and their impact on the perception and use of the cloud in the short, medium, and long term.

HPC platforms @ UL: Overview (as of 2014) and Usage
Varrette, Sébastien UL; Bouvry, Pascal UL; Georgatos, Fotis et al

Presentation (2014, May)

Hybridisation Schemes for Communication Satellite Payload Configuration Optimisation
Stathakis, Apostolos; Danoy, Grégoire UL; Talbi, El-Ghazali et al

in 17th European Conference on Applications of Evolutionary Computation (EvoApplications 2014) (2014, April)

Evaluating the HPC Performance and Energy-Efficiency of Intel and ARM-based systems with synthetic and bioinformatics workloads
Plugaru, Valentin UL; Varrette, Sébastien UL; Pinel, Frédéric UL et al

Report (2014)

The increasing demand for High Performance Computing (HPC), paired with the higher power requirements of ever-faster systems, has led to the search for architectures that are both performant and more energy-efficient. This article compares and contrasts the performance and energy efficiency of two modern clusters, a traditional Intel Xeon one and a low-power ARM-based one, which are tested with the recently developed High Performance Conjugate Gradient (HPCG) benchmark and the ABySS, FASTA and MrBayes bioinformatics applications. We show a higher Performance-per-Watt valuation of the ARM cluster, and lower energy usage during the tests, which does not offset the much faster job completion rate obtained by the Intel cluster, making the latter more suitable for the considered workloads given the disparity in the performance results.
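
A back-of-the-envelope way to read such results is to separate performance per watt from energy-to-solution and time-to-solution. The snippet below computes all three for two hypothetical cluster profiles; the numbers are illustrative only and are not the figures measured in the report.

    # Illustrative comparison only; the cluster figures below are made-up, not the report's data.
    def performance_per_watt(gflops, avg_power_watts):
        return gflops / avg_power_watts           # GFLOPS per watt

    def energy_to_solution(avg_power_watts, runtime_seconds):
        return avg_power_watts * runtime_seconds  # joules spent to finish the workload

    # Hypothetical profiles for the same workload on each cluster.
    clusters = {
        "Intel Xeon": {"gflops": 400.0, "power": 800.0, "runtime": 1200.0},
        "ARM":        {"gflops": 60.0,  "power": 90.0,  "runtime": 8000.0},
    }

    for name, c in clusters.items():
        ppw = performance_per_watt(c["gflops"], c["power"])
        energy_kj = energy_to_solution(c["power"], c["runtime"]) / 1e3
        print(f"{name:10s} perf/W = {ppw:.2f} GFLOPS/W, "
              f"energy = {energy_kj:.0f} kJ, runtime = {c['runtime']:.0f} s")

With these made-up numbers the ARM profile wins on performance per watt and on energy, but takes far longer to finish, mirroring the trade-off discussed in the abstract.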

Designing Robust Volunteer-based Evolutionary Algorithms
Jimenez Laredo, Juan Luis UL; Bouvry, Pascal UL; Lombraña Gonzalez, Daniel et al

in Genetic Programming and Evolvable Machines (2014)

This paper tackles the design of scalable and fault-tolerant evolutionary algorithms computed on volunteer platforms. These platforms aggregate computational resources from contributors all around the world. Given that resources may join the system only for a limited period of time, the challenge of a volunteer-based evolutionary algorithm is to take advantage of a large amount of computational power that is, in turn, volatile. The paper first analyzes the speed of convergence of massively parallel evolutionary algorithms. Then, it provides some guidance on how to design efficient policies to overcome the algorithmic loss of quality when the system undergoes high rates of transient failures, i.e. computers fail only for a limited period of time and then become available again. In order to provide empirical evidence, experiments were conducted on two well-known problems which require large population sizes to be solved, the first based on a genetic algorithm and the second on genetic programming. Results show that, in general, evolutionary algorithms undergo a graceful degradation under the stress of losing computing nodes. Additionally, newly available nodes can also contribute to improving the search process. Despite losing up to 90% of the initial computing resources, volunteer-based evolutionary algorithms can find the same solutions in a failure-prone run as in a failure-free one.
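
As a rough illustration of the graceful-degradation claim, the toy simulation below runs a simple generational GA on OneMax while randomly shrinking the population to mimic volunteer nodes leaving. All parameters are arbitrary and the code is not the distributed implementation studied in the paper.

    # Toy GA on OneMax with simulated loss of "volunteer" individuals (Python).
    import random

    def onemax(bits):
        return sum(bits)

    def evolve(pop, rng, generations=50, fail_rate=0.1, min_pop=8):
        length = len(pop[0])
        for _ in range(generations):
            # Transient failure: a slice of the population becomes unavailable.
            if len(pop) > min_pop and rng.random() < fail_rate:
                lost = rng.randrange(1, len(pop) // 4 + 1)
                pop = pop[:-lost]
            offspring = []
            for _ in range(len(pop)):
                a, b = rng.sample(pop, 2)
                parent = max(a, b, key=onemax)            # binary tournament selection
                child = [bit ^ (rng.random() < 1.0 / length) for bit in parent]  # bit-flip mutation
                offspring.append(child)
            pop = offspring
        return max(onemax(ind) for ind in pop)

    if __name__ == "__main__":
        rng = random.Random(3)
        pop = [[rng.randint(0, 1) for _ in range(64)] for _ in range(64)]
        print("best fitness despite node loss:", evolve(pop, rng), "out of 64")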

Solving the three dimensional quadratic assignment problem on a computational grid
Mezmaz, Mohand; Mehdi, Malika UL; Bouvry, Pascal UL et al

in Cluster Computing (2014), 17(2), 205-217

The exact resolution of large instances of combinatorial optimization problems, such as the three-dimensional quadratic assignment problem (Q3AP), is a real challenge for grid computing. Indeed, it is necessary to reconsider the resolution algorithms and take into account the characteristics of such environments, especially their large scale, the dynamic availability of resources, and their multi-domain administration. In this paper, we revisit the design and implementation of the branch and bound algorithm for solving large combinatorial optimization problems such as Q3AP on computational grids. Such gridification is based on new ways to efficiently deal with some crucial issues, mainly dynamic adaptive load balancing and fault tolerance. Our new approach allowed the exact resolution, on a nation-wide grid, of a difficult Q3AP instance. To solve this instance, an average of 1,123 computing cores were used for less than 12 days, with a peak of around 3,427 computing cores.

The sandpile scheduler: How self-organized criticality may lead to dynamic load-balancing
Jimenez Laredo, Juan Luis UL; Bouvry, Pascal UL; Guinand, Frederic et al

in Cluster Computing (2014)

This paper studies a self-organized criticality model called the sandpile for dynamically load-balancing tasks arriving in the form of Bags-of-Tasks in large-scale decentralized systems. The sandpile is designed as a decentralized agent system characterizing a cellular automaton, which works in a critical state at the edge of chaos. Depending on the state of the cellular automaton, different responses may occur when a new task is assigned to a resource: it may change nothing or generate avalanches that reconfigure the state of the system. The abundance of such avalanches is in a power-law relation with their sizes, a scale-invariant behavior that emerges without requiring tuning or control parameters. This means that large, catastrophic avalanches are very rare, while small ones occur very often. Such an emergent pattern can be efficiently adapted for non-clairvoyant scheduling, where tasks are load-balanced across computing resources to maximize performance without assuming any knowledge of the tasks' features. The algorithm design is experimentally validated, showing that the sandpile is able to find near-optimal schedules by reacting differently to different workload and architecture conditions.
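
To make the avalanche mechanism concrete, the sketch below uses a simplified, neighbour-only toppling rule on a ring of resources: a resource sheds a task to a neighbour whenever it holds at least two tasks more, so a single task arrival can trigger anything from no migration to a long chain of them. The rule is an assumption for illustration and is not the cellular-automaton model defined in the paper.

    # Simplified sandpile-style load balancing on a ring of resources (Python).
    import random

    def relax(loads):
        """Migrate tasks between neighbours until no resource exceeds a neighbour by 2 or more.
        Returns the avalanche size, i.e. the number of migrations triggered."""
        n = len(loads)
        moves = 0
        changed = True
        while changed:
            changed = False
            for i in range(n):
                for j in ((i - 1) % n, (i + 1) % n):   # only local, neighbour-to-neighbour moves
                    if loads[i] - loads[j] >= 2:
                        loads[i] -= 1
                        loads[j] += 1
                        moves += 1
                        changed = True
        return moves

    if __name__ == "__main__":
        rng = random.Random(1)
        loads = [0] * 16
        avalanches = []
        for _ in range(200):                           # Bag-of-Tasks arrivals, one task at a time
            loads[rng.randrange(len(loads))] += 1
            avalanches.append(relax(loads))
        print("final loads:", loads)
        print("largest avalanche:", max(avalanches),
              "| arrivals causing no migration:", avalanches.count(0))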

Special issue: Energy-efficiency in large distributed computing architectures
Dorronsoro, Bernabé; Danoy, Grégoire UL; Bouvry, Pascal UL

in Future Generation Computer Systems (2014), 36

Energy Savings on a Cloud-based Opportunistic Infrastructure
Pecero, Johnatan UL; Diaz, Cesar UL; Castro, Harold et al

in Service Oriented Computing ICSOC 2013 Workshops (2014)

In this paper, we address energy savings on a Cloud-based opportunistic infrastructure. The infrastructure implements opportunistic design concepts to provide basic services, such as virtual CPUs, RAM and disk, while profiting from unused capabilities of desktop computer laboratories in a non-intrusive way. We consider the problem of virtual machine consolidation on the opportunistic cloud computing resources. We investigate four workload packing algorithms that place a set of virtual machines on the smallest number of physical machines, in order to increase resource utilization and to transition part of the unused resources into lower power states or switch them off. We empirically evaluate these heuristics on real workload traces collected from our experimental opportunistic cloud, called UnaCloud. The final aim is to implement the best strategy on UnaCloud. The results show that a consolidation algorithm implementing a policy that takes into account the features and constraints of the opportunistic cloud saves more than 40% more energy than related consolidation heuristics, on top of the savings already earned by the opportunistic environment.
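
For context, the simplest family of such packing heuristics is bin packing; the first-fit-decreasing sketch below consolidates VM CPU demands onto as few hosts as possible so that the remaining hosts can be powered down. It is a generic illustration, not the opportunistic-cloud-aware policy selected for UnaCloud in the paper.

    # First-fit-decreasing consolidation of VMs onto homogeneous hosts (Python).
    def consolidate(vm_demands, host_capacity):
        """Place VMs (a single CPU demand per VM, for simplicity) on as few hosts as possible."""
        hosts = []                                     # each entry: [remaining_capacity, [vm, ...]]
        for vm in sorted(vm_demands, reverse=True):    # largest VMs first
            for host in hosts:
                if host[0] >= vm:                      # first host with enough spare capacity
                    host[0] -= vm
                    host[1].append(vm)
                    break
            else:
                hosts.append([host_capacity - vm, [vm]])   # power on a new host
        return [h[1] for h in hosts]

    if __name__ == "__main__":
        vms = [2, 4, 1, 3, 2, 2, 4, 1]                 # hypothetical vCPU demands
        placement = consolidate(vms, host_capacity=8)
        print(len(placement), "hosts used:", placement)
        # Hosts that receive no VMs can be moved to a low-power state or switched off.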

Performance Evaluation of an IaaS Opportunistic Cloud Computing
Diaz, Cesar UL; Pecero, Johnatan UL; Bouvry, Pascal UL et al

Poster (2014)

This poster shows the performance evaluation of the UnaCloud Opportunistic Computing IaaS. We analyze, from an HPC perspective, two virtualization frameworks, VirtualBox and VMware ESXi, and compare them over this particular opportunistic cloud environment. The benchmarks consist of two sets of tests, High Performance Linpack and IOzone, which examine the computing performance and the Input/Output response. The purpose of the experiments is to evaluate the behavior of the different virtual environments over an opportunistic cloud environment and to investigate how these are affected by different percentages of end-users. The results show better performance for VirtualBox than for VMware, and the other way around for I/O response. Nevertheless, the experiments show that VirtualBox is more robust than VMware.

Evolutionary Algorithms for Mobile Ad Hoc Networks
Dorronsoro, Bernabé UL; Ruiz, Patricia UL; Danoy, Grégoire UL et al

Book published by John Wiley & Sons (2014)

Adaptive Energy Efficient Distributed VoIP Load Balancing in Federated Cloud Infrastructure
Tchernykh, Andrei; Cortés-Mendoza, Jorge M.; Pecero, Johnatan E. et al

in IEEE International Conference on Cloud Networking (CLOUDNET), Luxembourg City (2014)

Cellular Automata Approach to Maximum Lifetime Coverage Problem in Wireless Sensor Networks
Tretyakova, Antonina; Seredynski, Franciszek; Bouvry, Pascal UL

in Wąs, Jarosław; Sirakoulis, Georgios; Bandini, Stefania (Eds.) Cellular Automata: 11th International Conference on Cellular Automata for Research and Industry, ACRI 2014, Kraków, Poland, September 22–25, 2014, Proceedings (2014)

In this paper, we propose a novel distributed algorithm based on the Graph Cellular Automata (GCA) concept to solve the Maximum Lifetime Coverage Problem (MLCP) in Wireless Sensor Networks (WSNs). The proposed algorithm possesses all the advantages of a localized algorithm: using only some knowledge about its neighbors, the WSN is able to self-organize in such a way as to prolong its lifetime while preserving the required coverage ratio of a target field. The paper presents the results of an experimental study of the proposed algorithm and compares them with a centralized genetic algorithm.
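
To make the localized idea concrete, the toy rule below lets a sensor switch off only when every target it covers is also covered by an active neighbour, and sensors are updated one after another so that two mutually redundant neighbours never sleep at the same time. The transition rule, topology and energy model are assumptions for illustration, not the GCA rules evaluated in the paper.

    # Toy "sleep if redundant" sweep for sensor coverage (Python).
    def gca_sweep(active, covers, neighbours):
        """One asynchronous sweep over the sensors; `active` is updated in place.
        covers[i] is the set of targets sensor i monitors, neighbours[i] its neighbour indices."""
        for i in sorted(active):
            if not active[i]:
                continue
            redundant = all(
                any(active[j] and t in covers[j] for j in neighbours[i])
                for t in covers[i]
            )
            if redundant:
                active[i] = False        # sleep to save energy; coverage is preserved
        return active

    if __name__ == "__main__":
        covers = {0: {"A", "B"}, 1: {"A", "B"}, 2: {"B", "C"}, 3: {"C"}}
        neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        active = {i: True for i in covers}
        gca_sweep(active, covers, neighbours)
        print(active)                    # sensors 0 and 2 sleep; targets A, B, C stay covered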

Cooperative Selection: Improving Tournament Selection via Altruism
Jimenez Laredo, Juan Luis UL; Nielsen, Sune Steinbjorn UL; Danoy, Grégoire UL et al

in The 14th European Conference on Evolutionary Computation in Combinatorial Optimisation (2014)

This paper analyzes the dynamics of a new selection scheme based on altruistic cooperation between individuals. The scheme, which we refer to as cooperative selection, extends tournament selection and imposes a stringent restriction on the mating chances of an individual during its lifespan: winning a tournament entails a depreciation of its fitness value. We show that altruism minimizes the loss of genetic diversity while increasing the selection frequency of the fittest individuals. An additional contribution of this paper is the formulation of a new combinatorial problem for maximizing the similarity of proteins based on their secondary structure. We conduct experiments on this problem in order to validate cooperative selection. The new selection scheme outperforms tournament selection for any setting of the parameters and offers the best trade-off between maximizing genetic diversity and minimizing computational effort.
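
A minimal sketch of the mechanism described above: the winner of a k-tournament is returned as a parent, but its effective fitness is immediately depreciated so that it cannot monopolise future tournaments. The multiplicative decay factor is an assumption for illustration; the paper defines its own depreciation scheme.

    # Cooperative (altruistic) tournament selection sketch (Python).
    import random

    def cooperative_selection(population, eff_fitness, k=3, decay=0.5, rng=random):
        """Pick one parent by k-tournament on *effective* fitness, then depreciate the winner.
        `eff_fitness` maps individual index -> effective fitness and is updated in place."""
        contenders = rng.sample(range(len(population)), k)
        winner = max(contenders, key=lambda i: eff_fitness[i])
        eff_fitness[winner] *= decay     # altruism: winning reduces future mating chances
        return population[winner]

    if __name__ == "__main__":
        rng = random.Random(0)
        pop = [f"ind{i}" for i in range(10)]
        eff = {i: float(10 - i) for i in range(10)}     # ind0 is initially the fittest
        picks = [cooperative_selection(pop, eff, rng=rng) for _ in range(20)]
        print(picks)   # unlike plain tournament selection, ind0 does not monopolise the picks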
