Pecero, Johnatan, in Service Oriented Computing ICSOC 2013 Workshops (2014).
In this paper, we address energy savings on a Cloud-based opportunistic infrastructure. The infrastructure implements opportunistic design concepts to provide basic services, such as virtual CPUs, RAM and disk, while profiting from unused capabilities of desktop computer laboratories in a non-intrusive way. We consider the problem of virtual machine consolidation on the opportunistic cloud computing resources. We investigate four workload packing algorithms that place a set of virtual machines on the least number of physical machines to increase resource utilization and to transition parts of the unused resources into lower power states or switch them off. We empirically evaluate these heuristics on real workload traces collected from our experimental opportunistic cloud, called UnaCloud. The final aim is to implement the best strategy on UnaCloud. The results show that a consolidation algorithm implementing a policy that takes into account the features and constraints of the opportunistic cloud saves more than 40% more energy than related consolidation heuristics, on top of the savings already earned by the opportunistic environment.
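The four packing algorithms are not named in the abstract above, so the sketch below is a purely illustrative first-fit-decreasing placement of CPU/RAM demands onto as few hosts as possible; the capacity model, the dominant-share sort key and all identifiers are assumptions for the example, not the paper's heuristics.

```python
# Illustrative first-fit-decreasing consolidation sketch (not the paper's
# four heuristics). A VM is a (cpu, ram) demand pair; hosts left without
# any VM are candidates for a low-power state or switch-off.

def first_fit_decreasing(vms, host_capacity, num_hosts):
    """Place (cpu, ram) VM demands on as few hosts as possible."""
    cap_cpu, cap_ram = host_capacity
    hosts = [{"cpu": 0.0, "ram": 0.0, "vms": []} for _ in range(num_hosts)]
    # Sort VMs by their dominant resource share, largest first.
    for vm in sorted(vms, key=lambda v: max(v[0] / cap_cpu, v[1] / cap_ram), reverse=True):
        for host in hosts:
            if host["cpu"] + vm[0] <= cap_cpu and host["ram"] + vm[1] <= cap_ram:
                host["cpu"] += vm[0]
                host["ram"] += vm[1]
                host["vms"].append(vm)
                break
        else:
            raise ValueError("VM does not fit on any host")
    used = [h for h in hosts if h["vms"]]
    return used, num_hosts - len(used)  # idle hosts can be powered down

# Example: six VMs on four hosts with 8 cores / 16 GB RAM each.
used, idle = first_fit_decreasing(
    [(2, 4), (4, 8), (1, 2), (2, 2), (3, 6), (1, 1)],
    host_capacity=(8, 16), num_hosts=4)
print(f"{len(used)} hosts in use, {idle} hosts can sleep")
```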
; Danoy, Grégoire, in Future Generation Comp. Syst. (2014), 36.

Diaz, Cesar, Poster (2014).
This poster shows the performance evaluation of the UnaCloud opportunistic computing IaaS. We analyze, from an HPC perspective, two virtualization frameworks, VirtualBox and VMware ESXi, and compare them over this particular opportunistic cloud environment. The benchmarks consist of two sets of tests, High Performance Linpack and IOzone, that examine the computing performance and the input/output response. The purpose of the experiments is to evaluate the behavior of the different virtual environments over an opportunistic cloud environment and to investigate how these are affected by different percentages of end-users. The results show a better computing performance for VirtualBox than for VMware, and the other way around for I/O response. Nevertheless, the experiments show that VirtualBox is more robust than VMware.

Jimenez Laredo, Juan Luis, in Cluster Computing (2014).
This paper studies a self-organized criticality model called sandpile for dynamically load-balancing tasks arriving in the form of Bags-of-Tasks in large-scale decentralized systems. The sandpile is designed as a decentralized agent system characterizing a cellular automaton, which works in a critical state at the edge of chaos. Depending on the state of the cellular automaton, different responses may occur when a new task is assigned to a resource: it may change nothing or generate avalanches that reconfigure the state of the system. The abundance of such avalanches is in a power-law relation with their sizes, a scale-invariant behavior that emerges without requiring tuning or control parameters. That means that large (catastrophic) avalanches are very rare while small ones occur very often. Such an emergent pattern can be efficiently adapted for non-clairvoyant scheduling, where tasks are load-balanced across computing resources trying to maximize performance but without assuming any knowledge of the task features. The algorithm design is experimentally validated, showing that the sandpile is able to find near-optimal schedules by reacting differently to different workload and architecture conditions.
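The scale-invariant behavior this abstract refers to is the usual self-organized-criticality signature; written generically below, where the exponent and notation are assumptions since the abstract gives no formula.

```latex
% Generic avalanche-size distribution under self-organized criticality:
% the abundance of avalanches of size s follows a power law, i.e. a
% straight line of slope -\tau on a log-log plot, so large avalanches
% are rare while small ones are frequent. \tau is illustrative.
P(s) \propto s^{-\tau}
\qquad\Longleftrightarrow\qquad
\log P(s) = -\tau \log s + \mathrm{const}.
```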
Nielsen, Sune Steinbjorn, Scientific Conference (2014).

Nielsen, Sune Steinbjorn, Scientific Conference (2014).

; Bouvry, Pascal, in 4OR: A Quarterly Journal of Operations Research (2014), 12(1), 35-48.
A customer would like to buy a given set of products in a given set of Internet shops. For each Internet shop, standard prices for the products are known, as well as a concave increasing discounting function of the total standard and delivery price. The problem is to buy all the required products at the minimum total discounted price. The computational complexity of various special cases is established. Properties of optimal solutions are proved, and polynomial-time and exponential-time solution algorithms based on these properties are designed. Two heuristic algorithms are suggested and computationally tested.
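One way to read the problem statement in this abstract as an integer program is sketched below; the notation (x, y, p, d, f) is mine, and treating the delivery price as incurred once per used shop is an assumption, not necessarily the paper's exact model.

```latex
% Illustrative formalization (notation not taken from the paper).
% N: products, M: shops, p_{ij}: standard price of product j in shop i,
% d_i: delivery price of shop i, f_i: concave increasing discounting
% function, x_{ij}=1 if product j is bought in shop i, y_i=1 if shop i
% is used at all.
\min_{x,\,y}\;\sum_{i\in M} f_i\Big(d_i\,y_i + \sum_{j\in N} p_{ij}\,x_{ij}\Big)
\quad\text{s.t.}\quad
\sum_{i\in M} x_{ij}=1 \;\;\forall j\in N,\qquad
x_{ij}\le y_i,\qquad x_{ij},\,y_i\in\{0,1\}.
```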
; ; Bouvry, Pascal, in Was, Jaroslaw; Sirakoulis, Georgios; Bandini, Stefania (Eds.), Cellular Automata: 11th International Conference on Cellular Automata for Research and Industry, ACRI 2014, Krakow, Poland, September 22–25, 2014, Proceedings (2014).
In this paper, we propose a novel distributed algorithm based on the Graph Cellular Automata (GCA) concept to solve the Maximum Lifetime Coverage Problem (MLCP) in Wireless Sensor Networks (WSNs). The proposed algorithm possesses all the advantages of a localized algorithm, i.e. using only some knowledge about its neighbors, the WSN is able to self-organize in such a way as to prolong its lifetime while preserving the required coverage ratio of a target field. The paper presents the results of an experimental study of the proposed algorithm and compares them with a centralized genetic algorithm.

; Mehdi, Malika, in Cluster Computing (2014), 17(2), 205-217.
The exact resolution of large instances of combinatorial optimization problems, such as the three-dimensional quadratic assignment problem (Q3AP), is a real challenge for grid computing. Indeed, it is necessary to reconsider the resolution algorithms and take into account the characteristics of such environments, especially their large scale, the dynamic availability of resources, and their multi-domain administration. In this paper, we revisit the design and implementation of the branch-and-bound algorithm for solving large combinatorial optimization problems such as the Q3AP on computational grids. Such gridification is based on new ways to efficiently deal with some crucial issues, mainly dynamic adaptive load balancing and fault tolerance. Our new approach allowed the exact resolution, on a nation-wide grid, of a difficult Q3AP instance. To solve this instance, an average of 1,123 computing cores were used for less than 12 days, with a peak of around 3,427 computing cores.

; ; Tantar, Alexandru-Adrian, Book published by Springer (2014).

Fiandrino, Claudio, in IEEE Global Communications Conference, Austin, TX, USA, 2014.
The popularity of cloud applications has surged in recent years. Billions of mobile devices remain always connected. Location services, online games, social networking and navigation are just a few examples of "always on" cloud applications in which the same or partially overlapping content is delivered to multiple users. In this paper, we propose a technique, called NC-CELL, which uses network coding to foster content distribution in mobile cellular networks. Specifically, NC-CELL implements a software module at mobile base stations (or eNodeBs) which scans in-transit traffic and looks for opportunities to code packets destined to different mobile users together. The proposed approach can significantly improve cell throughput and is particularly relevant for delay-tolerant content distribution.

; ; Kliazovich, Dzmitry, in International Conference on Information Networking (ICOIN), Phuket, Thailand, 2014.

; ; et al., in IEEE International Conference on Cloud Networking (CLOUDNET), Luxembourg City, 2014.

Jimenez Laredo, Juan Luis, in The 14th European Conference on Evolutionary Computation in Combinatorial Optimisation (2014).
This paper analyzes the dynamics of a new selection scheme based on altruistic cooperation between individuals. The scheme, which we refer to as cooperative selection, extends from tournament selection and imposes a stringent restriction on the mating chances of an individual during its lifespan: winning a tournament entails a depreciation of its fitness value. We show that altruism minimizes the loss of genetic diversity while increasing the selection frequency of the fittest individuals. An additional contribution of this paper is the formulation of a new combinatorial problem for maximizing the similarity of proteins based on their secondary structure. We conduct experiments on this problem in order to validate cooperative selection. The new selection scheme outperforms tournament selection for any setting of the parameters and provides the best trade-off, maximizing genetic diversity while minimizing computational effort.
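A minimal sketch of the mechanism described in the abstract above, assuming a maximization setting: ordinary tournament selection in which the winner's fitness is depreciated each time it is selected. The multiplicative depreciation factor and the tournament size are assumptions; the abstract does not specify them.

```python
import random

# Sketch of cooperative selection as described in the abstract: a tournament
# winner pays a fitness "depreciation", reducing its future mating chances.
# The depreciation rule (a multiplicative factor here) and the tournament
# size are assumptions for the example.

def cooperative_selection(population, tournament_size=2, depreciation=0.5):
    """population: list of dicts with a mutable 'fitness' key (maximization)."""
    contenders = random.sample(population, tournament_size)
    winner = max(contenders, key=lambda ind: ind["fitness"])
    winner["fitness"] *= depreciation  # altruistic step: penalize the winner
    return winner

# Example: repeated selections spread over the population instead of
# repeatedly picking the single fittest individual.
pop = [{"name": f"ind{i}", "fitness": float(f)} for i, f in enumerate([9, 7, 5, 3])]
picks = [cooperative_selection(pop)["name"] for _ in range(6)]
print(picks)
```

Because a recent winner temporarily looks less fit, subsequent tournaments tend to pick other individuals, which is the diversity-preserving effect the abstract claims.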
Nguyen, Anh Quan, Scientific Conference (2013, December 18).
In this paper, we present a new energy-efficiency model and architecture for cloud management based on a prediction model with Gaussian Mixture Models. The methodology relies on a distributed agent model, and the validation will be performed on OpenStack. This paper intends to be a position paper; the implementation and experimental runs will be conducted in future work. The design concept leverages the prediction model by providing a full architecture binding the resource demands, the predictions and the actual cloud environment (OpenStack). The prediction analysis feeds the power-aware agents that run on the compute nodes in order to put the nodes into sleep mode when the load is low, reducing the energy consumption of the data center.

Diaz, Cesar, in Journal of Supercomputing (2013).
For heterogeneous distributed computing systems, important design issues are scalability and system optimization. Given such systems, it is crucial to develop low computational complexity algorithms that schedule tasks in a manner that exploits the heterogeneity of the resources and applications. In this paper, we report and evaluate three scalable and fast scheduling heuristics for highly heterogeneous distributed computing systems. We conduct a comprehensive performance evaluation study using simulation. The benchmarking outlines the performance of the schedulers in terms of scalability, makespan, flowtime, computational complexity, and memory utilization. The set of experimental results shows that our heuristics perform as well as the traditional approaches for makespan and flowtime, while featuring lower complexity, lower running time, and lower memory usage. The experimental results also detail the various scenarios under which certain algorithms excel or fail.
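The two scheduling metrics named in this abstract have standard definitions, reproduced here in their usual textbook form (these formulas are not quoted from the paper).

```latex
% C_j denotes the completion time of task j in a schedule of n tasks.
\text{makespan} = C_{\max} = \max_{1 \le j \le n} C_j,
\qquad
\text{flowtime} = \sum_{j=1}^{n} C_j .
```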
; ; Pinel, Frédéric, in Proc. of the 3rd Intl. Conf. on Cloud and Green Computing (CGC'13) (2013, October).

Varrette, Sébastien, in Proc. of the 25th Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2013) (2013, October).
With a growing concern about the considerable energy consumed by HPC platforms and data centers, research efforts are targeting green approaches with higher energy efficiency. In particular, virtualization is emerging as the prominent approach to mutualize the energy consumed by a single server running multiple VM instances. Even today, it remains unclear whether the overhead induced by virtualization and the corresponding hypervisor middleware suits an environment as demanding as an HPC platform. In this paper, we analyze from an HPC perspective the three most widespread virtualization frameworks, namely Xen, KVM, and VMware ESXi, and compare them with a baseline environment running in native mode. We performed our experiments on the Grid'5000 platform by measuring the results of the reference HPL benchmark. Power measurements were also performed in parallel to quantify the potential energy efficiency of the virtualized environments. In general, our study offers novel incentives toward in-house HPC platforms running without any virtualization framework.

; ; Bouvry, Pascal, in Future Generation Computer Systems (2013), 29(8), 1885-1900.
This paper presents a novel heuristic approach, named JDS-HNN, to simultaneously schedule jobs and replicate data files to different entities of a grid system so that the overall makespan of executing all jobs, as well as the overall delivery time of all data files to their dependent jobs, is concurrently minimized. JDS-HNN is inspired by a natural distribution of a variety of stones among different jars and utilizes a Hopfield Neural Network in one of its optimization stages to achieve its goals. The performance of JDS-HNN has been measured using several benchmarks varying from medium- to very-large-sized systems. JDS-HNN's results are compared against the performance of other algorithms to show its superiority under different working conditions. These results also provide invaluable insights into scheduling and replicating dependent jobs and data files, as well as their performance-related issues, for various grid environments.

Plugaru, Valentin, Report (2013).
Building fast software in an HPC environment raises great challenges, as software used for simulation and modelling is generally complex and has many dependencies. Current approaches involve manual tuning of compilation parameters in order to minimize the run time, based on a set of predefined defaults, but such an approach requires expert knowledge, is not scalable and can be very expensive in person-hours. In this paper we propose and develop a modular framework called POHPC that uses the Simulated Annealing meta-heuristic to automatically search for the optimal set of library options and compilation flags that give the best execution time for a library-application pair on a selected hardware architecture. The framework can be used in modern HPC clusters with a variety of batch scheduling systems as execution backends for the optimization runs, and it will discover optimal combinations as well as invalid sets of options and flags that result in failed builds or application crashes. We demonstrate the optimization of the FFTW library working in conjunction with the high-profile community codes GROMACS and QuantumESPRESSO, whereby the suitability of the technique is validated.
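As a hedged illustration of the search strategy described in the last abstract, the sketch below runs a generic simulated annealing loop over a set of boolean compiler flags; the flag list, the measure_runtime() benchmark hook and the geometric cooling schedule are placeholders, not POHPC's actual implementation.

```python
import math
import random

# Generic simulated-annealing search over boolean compiler flags.
FLAGS = ["-O3", "-funroll-loops", "-ffast-math", "-march=native"]  # hypothetical list

def measure_runtime(flag_subset):
    """Placeholder: build the library/application with these flags, run the
    benchmark and return wall-clock seconds, or float('inf') on failure."""
    return random.uniform(10.0, 20.0) - len(flag_subset)  # stand-in cost

def anneal(steps=200, t_start=5.0, t_end=0.05):
    current = set()
    current_cost = measure_runtime(current)
    best, best_cost = set(current), current_cost
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # geometric cooling
        candidate = set(current)
        candidate.symmetric_difference_update({random.choice(FLAGS)})  # flip one flag
        cost = measure_runtime(candidate)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
            current, current_cost = candidate, cost
        if current_cost < best_cost:
            best, best_cost = set(current), current_cost
    return best, best_cost

print(anneal())
```

In a real run, measure_runtime() would submit a build-and-benchmark job through the cluster's batch scheduler and return infinity when the build or the application fails, which is how invalid flag combinations would be detected and discarded.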