Guzek, Mateusz. In Applied Soft Computing (2014), 24.
The ongoing increase of energy consumption by IT infrastructures forces data center managers to find innovative ways to improve energy efficiency. Energy efficiency is also a focal point for different branches of computer science due to its financial, ecological, political, and technical consequences. One answer is scheduling combined with the dynamic voltage scaling technique to optimize energy consumption. The reasoning is based on the link between current semiconductor technologies and the energy state management of processors, where sacrificing performance can save energy. This paper investigates and solves the multi-objective precedence-constrained application scheduling problem on a distributed computing system, with two main aims: the creation of general algorithms to solve the problem, and the examination of the problem by means of a thorough analysis of the results returned by the algorithms. The first aim was achieved in two steps: adapting state-of-the-art multi-objective evolutionary algorithms by designing new operators, and validating them in terms of performance and energy. The second aim was accomplished by an extensive number of algorithm executions on a large and diverse benchmark and a subsequent analysis of performance among the proposed algorithms. Finally, the study proves the validity of the proposed method and points out the best-performing multi-objective algorithm schema and the most important factors for the algorithms' performance.
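The energy/performance trade-off exploited by such DVFS-based scheduling can be illustrated with a toy model. Everything below (the cubic power law, the normalization, the cycle counts) is a textbook-style assumption for illustration, not the paper's actual model:

```python
# Toy model of the DVFS trade-off: dynamic power grows roughly with V^2*f,
# and voltage scales with frequency, so power ~ f**3. Lowering frequency
# lengthens a task but can reduce the energy it consumes.
# All constants here are illustrative assumptions, not from the paper.

def task_metrics(cycles, freq_ghz):
    """Return (time_s, energy_j) for a task run at a given frequency.

    Normalized so that 1 GHz draws 1 W under the cubic power model.
    """
    time_s = cycles / (freq_ghz * 1e9)
    power_w = freq_ghz ** 3
    return time_s, power_w * time_s

def dominates(a, b):
    """Pareto dominance on (time, energy) pairs: a dominates b if it is
    no worse in both objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

fast = task_metrics(2e9, 2.0)   # 2e9 cycles at 2 GHz: 1.0 s, 8.0 J
slow = task_metrics(2e9, 1.0)   # same task at 1 GHz:  2.0 s, 2.0 J
# Neither setting dominates the other: frequency scaling trades time for energy,
# which is why the problem is genuinely multi-objective.
assert not dominates(fast, slow) and not dominates(slow, fast)
```

In a schedule-level evaluation the same dominance test is applied to whole schedules (makespan, total energy), which is what the multi-objective evolutionary algorithms above optimize over.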
Kieffer, Emmanuel. In International Conference on Metaheuristics and Nature Inspired Computing (META 2014) (2014, October).

Plugaru, Valentin. Scientific Conference (2014, October).

Muszynski, Jakub. In Proc. of the IEEE Intl. Conf. on Network and System Security (NSS 2014) (2014, October).
Newscast is a model for information dissemination and membership management in large-scale, agent-based distributed systems. It deploys a simple, peer-to-peer data exchange protocol. The Newscast protocol forms an overlay network and keeps it connected by means of an epidemic algorithm, thus featuring a complex, spatially structured, and dynamically changing environment. It has recently become very popular due to its inherent resilience to node volatility, as it exhibits strong self-healing properties. In this paper, we analyze the robustness of the Newscast model when executed in a distributed environment subjected to malicious acts. More precisely, we evaluate the resilience of Newscast against cheating faults and demonstrate that even a few naive cheaters are able to defeat the protocol by breaking the network connectivity. Concrete experiments are performed using a framework that implements both the protocol and the cheating model considered in this work.

Varrette, Sébastien. In Proc. of the 43rd Intl. Conf.
on Parallel Processing (ICPP-2014), Heterogeneous and Unconventional Cluster Architectures and Applications Workshop (HUCAA'14) (2014, September).

Muszynski, Jakub. In Proc. of Intl. Conf. on Cryptography and Security System (CSS'14) (2014, September).

Schleich, Julien. In Applied Soft Computing (2014), 21(0), 637-646.

Varrette, Sébastien. In Proc. of the 2014 Intl. Conf. on High Performance Computing Simulation (HPCS 2014) (2014, July).
The intensive growth of processing power, data storage and transmission capabilities has revolutionized many aspects of science. These resources are essential to achieve high-quality results in many application areas. In this context, the University of Luxembourg (UL) has operated since 2007 a High Performance Computing (HPC) facility and the related storage. Bridging computing and storage is a requirement of the UL service; the reasons are both legal (certain data may not move) and performance-related. Nowadays, people from the three faculties and the two interdisciplinary centres of the UL are users of this facility. More specifically, key research priorities such as Systems Biomedicine (by LCSB) and Security, Reliability and Trust (by SnT) require access to such HPC facilities in order to function in an adequate environment. The management of HPC solutions is a complex enterprise and a constant area for discussion and improvement.
The UL HPC facility and the services deployed on it form a complex computing system to manage by its scale: at the moment of writing, it consists of 150 servers, 368 nodes (3,880 computing cores) and 1,996 TB of shared raw storage, all configured, monitored and operated by three persons using advanced IT automation solutions based on Puppet [1], FAI [2] and Capistrano [3]. This paper covers all aspects of the management of such a complex infrastructure, whether technical or administrative. Most design choices and implemented approaches have been motivated by several years of experience in addressing research needs, mainly in the HPC area but also in complementary services (typically Web-based). In this context, we tried to answer many technological issues in a flexible and convenient way. This experience report may be of interest to other research centers, in the public or the private sector, looking for good if not best practices in cluster architecture and management.

Bouvry, Pascal. In IEEE Cloud Computing (2014), 1(1).
IEEE Cloud Computing encourages the academic and research communities to provide exciting articles on emerging cloud and adjacent technology trends and their impact on the perception and use of the cloud in the short, medium, and long term.

Bertholon, Benoit. In Proc. of the 17th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2014), part of the 28th IEEE/ACM Intl.
Parallel and Distributed Processing Symposium (IPDPS 2014) (2014, May).
With the advent of the Cloud Computing (CC) paradigm and the explosion of new Web Services proposed over the Internet (such as Google Office Apps, Dropbox or Doodle), the protection of the programs at the heart of these services becomes more and more crucial, especially for the companies making business on top of these services. The majority of these services now use the JavaScript programming language to interact with the user, as all modern web browsers, whether on desktops, game consoles, tablets or smart phones, include JavaScript interpreters, making it the most ubiquitous programming language in history. This context renews the interest in obfuscation techniques, i.e. rendering a program "unintelligible" without altering its functionality. The objective is to prevent reverse-engineering of the program for a certain period of time; an absolute protection by this means is unrealistic, since stand-alone obfuscation for arbitrary programs was proven impossible in 2001. In [11], we presented JSHADOBF, an obfuscation framework based on evolutionary heuristics designed to optimize, for a given input JavaScript program, the sequence of transformations that should be applied to the source code to improve its obfuscation capacity. Measuring this capacity is based on the combination of several metrics optimized simultaneously with Multi-Objective Evolutionary Algorithms (MOEAs). In this paper, we extend and complete the experiments made around JSHADOBF to analyze the impact of the underlying MOEA on the obfuscation process.
In particular, we compare the performance of NSGA-II and MOEA/D (two reference algorithms in the optimization domain) on top of JSHADOBF, first to obfuscate a pedagogical program inherited from linear algebra, then one of the most popular and widely used JavaScript libraries: jQuery.

Varrette, Sébastien. Presentation (2014, May).

; Danoy, Grégoire. In 17th European Conference on Applications of Evolutionary Computation (EvoApplications 2014) (2014, April).

Muszynski, Jakub. Presentation (2014, March).

Plugaru, Valentin. Report (2014).
The increasing demand for High Performance Computing (HPC), paired with the higher power requirements of ever-faster systems, has led to the search for both performant and more energy-efficient architectures. This article compares and contrasts the performance and energy efficiency of two modern clusters, a traditional Intel Xeon cluster and a low-power ARM-based cluster, which are tested with the recently developed High Performance Conjugate Gradient (HPCG) benchmark and the ABySS, FASTA and MrBayes bioinformatics applications. We show a higher Performance-per-Watt valuation of the ARM cluster and lower energy usage during the tests, which does not offset the much faster job completion rate obtained by the Intel cluster, making the latter more suitable for the considered workloads given the disparity in the performance results.
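The two efficiency measures behind such cluster comparisons reduce to simple ratios. A minimal sketch (the GFLOPS, wattage and runtime figures below are placeholders, not the report's measurements):

```python
# Energy-efficiency accounting used when comparing clusters.
# All numbers below are illustrative placeholders, not measured values.

def perf_per_watt(gflops, watts):
    """Sustained performance divided by average power draw (GFLOPS/W)."""
    return gflops / watts

def energy_to_solution(watts, runtime_s):
    """Total energy consumed by a job, in joules (power x time)."""
    return watts * runtime_s

# A frugal node can win on GFLOPS/W and energy-to-solution, yet still lose
# badly on time-to-solution, which is the trade-off the report describes:
arm  = {"gflops": 10.0, "watts": 20.0,  "runtime_s": 400.0}
xeon = {"gflops": 80.0, "watts": 300.0, "runtime_s": 50.0}

assert perf_per_watt(arm["gflops"], arm["watts"]) > perf_per_watt(xeon["gflops"], xeon["watts"])
assert energy_to_solution(arm["watts"], arm["runtime_s"]) < energy_to_solution(xeon["watts"], xeon["runtime_s"])
assert xeon["runtime_s"] < arm["runtime_s"]
```

Which metric should dominate the purchasing decision depends on whether jobs are throughput-bound or deadline-bound, which is why the report weighs completion rate against the efficiency ratios.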
Jimenez Laredo, Juan Luis. In Genetic Programming and Evolvable Machines (2014).
This paper tackles the design of scalable and fault-tolerant evolutionary algorithms computed on volunteer platforms. These platforms aggregate computational resources from contributors all around the world. Given that resources may join the system only for a limited period of time, the challenge of a volunteer-based evolutionary algorithm is to take advantage of a large amount of computational power that is, in turn, volatile. The paper first analyzes the speed of convergence of massively parallel evolutionary algorithms. Then, it provides some guidance on how to design efficient policies to overcome the algorithmic loss of quality when the system undergoes high rates of transient failures, i.e. computers fail only for a limited period of time and then become available again. In order to provide empirical evidence, experiments were conducted on two well-known problems which require large population sizes to be solved, the first based on a genetic algorithm and the second on genetic programming. Results show that, in general, evolutionary algorithms undergo a graceful degradation under the stress of losing computing nodes. Additionally, newly available nodes can also contribute to improving the search process. Despite losing up to 90% of the initial computing resources, volunteer-based evolutionary algorithms can find the same solutions in a failure-prone run as in a failure-free one.

; ; et al. In IEEE International Conference on Cloud Networking (CLOUDNET), Luxembourg City, 2014.
; Mehdi, Malika. In Cluster Computing (2014), 17(2), 205-217.
The exact resolution of large instances of combinatorial optimization problems, such as the three-dimensional quadratic assignment problem (Q3AP), is a real challenge for grid computing. Indeed, it is necessary to reconsider the resolution algorithms and take into account the characteristics of such environments, especially the large scale and dynamic availability of resources and their multi-domain administration. In this paper, we revisit the design and implementation of the branch and bound algorithm for solving large combinatorial optimization problems such as Q3AP on computational grids. Such gridification is based on new ways to efficiently deal with some crucial issues, mainly dynamic adaptive load balancing and fault tolerance. Our new approach allowed the exact resolution on a nation-wide grid of a difficult Q3AP instance. To solve this instance, an average of 1,123 computing cores were used for less than 12 days, with a peak of around 3,427 computing cores.

Dorronsoro, Bernabé. Book published by John Wiley & Sons (2014).

Guzek, Mateusz. In Concurrency and Computation: Practice and Experience (2014).

Pecero, Johnatan. In Service Oriented Computing ICSOC 2013 Workshops (2014).
In this paper, we address energy savings on a Cloud-based opportunistic infrastructure. The infrastructure implements opportunistic design concepts to provide basic services, such as virtual CPUs, RAM and disk, while profiting from unused capabilities of desktop computer laboratories in a non-intrusive way. We consider the problem of virtual machine consolidation on the opportunistic cloud computing resources. We investigate four workload packing algorithms that place a set of virtual machines on the least number of physical machines to increase resource utilization and to transition parts of the unused resources into lower power states or switch them off. We empirically evaluate these heuristics on real workload traces collected from our experimental opportunistic cloud, called UnaCloud. The final aim is to implement the best strategy on UnaCloud. The results show that a consolidation algorithm implementing a policy that takes into account features and constraints of the opportunistic cloud saves more than 40% more energy than related consolidation heuristics, beyond the savings already earned by the opportunistic environment.
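The packing step at the core of such consolidation heuristics can be sketched with a first-fit-decreasing bin-packing pass, one of the classic strategies in this family. The host capacity and VM demands below are illustrative, not UnaCloud's actual workload or the paper's specific policy:

```python
# First-fit-decreasing consolidation sketch: place VMs (by CPU demand) on
# as few hosts as possible, so the emptied hosts can enter a low-power state.
# Capacities and demands are illustrative, not real workload traces.

def consolidate(vm_demands, host_capacity):
    """Return a list of hosts, each a list of the VM demands placed on it."""
    hosts = []
    for demand in sorted(vm_demands, reverse=True):  # largest VMs first
        for host in hosts:
            if sum(host) + demand <= host_capacity:  # first host with room
                host.append(demand)
                break
        else:
            hosts.append([demand])                   # no room: open a new host
    return hosts

vms = [0.5, 0.7, 0.2, 0.4, 0.1, 0.6]  # CPU demand as a fraction of one host
placement = consolidate(vms, host_capacity=1.0)
assert len(placement) == 3                            # six VMs fit on three hosts
assert all(sum(host) <= 1.0 for host in placement)    # no host is overcommitted
```

An opportunistic-cloud policy like the one evaluated above would additionally weight hosts by their expected availability before packing, since desktop machines may be reclaimed by their owners at any time.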