Results 1-5 of 5.
Emeras, Joseph; Varrette, Sébastien. In IEEE Transactions on Cloud Computing (2019), 7(2), 456-468.

Abstract: While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, there is a growing wish for convergence between Cloud Computing (CC) and HPC platforms, with the commercial hope that CC infrastructures will eventually replace in-house facilities. If we set aside the performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when running an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that the instances offered by Cloud providers are most probably competitive from a cost point of view. In this article, we set out to confirm (or refute) this intuition by analyzing what composes the Total Cost of Ownership (TCO) of an in-house HPC facility operated internally since 2007. This TCO model is then used for comparison with the cost that would have been incurred to run the same platform (and the same workload) on a competitive Cloud IaaS offer. Our approach to this price comparison is three-fold. First, we propose a theoretical price-performance model based on a study of the actual Cloud instances offered by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on the HPC facility TCO analysis, we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. Finally, based on experimental benchmarking of the local cluster and of the Cloud instances, we update the former theoretical price model to reflect real system performance. The results obtained generally advocate the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing platforms, even when provided by the leading Cloud provider worldwide.
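The hourly comparison described in this abstract amounts to amortizing the facility's TCO (hardware acquisition plus operating expenses) over the core-hours it actually delivers, and setting that figure against the on-demand rate of a comparable EC2 instance, optionally corrected by a measured performance ratio. The Python sketch below illustrates that arithmetic with purely hypothetical figures; the prices, instance size, lifetime and utilization are placeholders, not the values derived in the paper.

```python
# Hypothetical sketch of an hourly price comparison between an in-house
# cluster and equivalent EC2 on-demand instances. All figures are made up
# for illustration; the paper derives them from the facility's real TCO
# and from Amazon's published pricing.

CAPEX = 1_200_000.0        # hardware acquisition cost (EUR), amortized below
LIFETIME_YEARS = 5         # assumed depreciation period
OPEX_PER_YEAR = 250_000.0  # power, cooling, staff, maintenance (EUR/year)
CORES = 2_000              # cores in the in-house cluster
UTILIZATION = 0.80         # fraction of core-hours actually consumed

HOURS_PER_YEAR = 365 * 24

def inhouse_cost_per_core_hour() -> float:
    """Amortized TCO divided by the core-hours the cluster really delivers."""
    yearly_cost = CAPEX / LIFETIME_YEARS + OPEX_PER_YEAR
    delivered_core_hours = CORES * HOURS_PER_YEAR * UTILIZATION
    return yearly_cost / delivered_core_hours

# Placeholder on-demand price for an instance assumed comparable to one
# cluster node (per instance-hour), and the number of cores it exposes.
EC2_PRICE_PER_HOUR = 1.00
EC2_CORES_PER_INSTANCE = 16

def ec2_cost_per_core_hour(perf_ratio: float = 1.0) -> float:
    """On-demand price per core-hour, optionally scaled by a measured
    performance ratio (in-house vs. cloud) to mimic the benchmark-adjusted
    variant of the price model."""
    return (EC2_PRICE_PER_HOUR / EC2_CORES_PER_INSTANCE) * perf_ratio

if __name__ == "__main__":
    print(f"in-house          : {inhouse_cost_per_core_hour():.4f} EUR/core-hour")
    print(f"EC2 (raw)         : {ec2_cost_per_core_hour():.4f} EUR/core-hour")
    print(f"EC2 (perf ratio 1.3): {ec2_cost_per_core_hour(1.3):.4f} EUR/core-hour")
```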
Ginolhac, Aurélien. Presentation (2018, June).

Emeras, Joseph. In Proc. of the 9th IEEE Intl. Conf. on Cloud Computing (CLOUD 2016) (2016, June).

Abstract: Since its advent in the middle of the 2000s, the Cloud Computing (CC) paradigm has increasingly been advertised as THE solution to most IT problems. While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, several voices (most probably commercial ones) express the wish that CC platforms could also serve HPC needs and eventually replace in-house HPC platforms. If we exclude the pure performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when submitted to an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that the instances offered by Cloud providers are most probably competitive from a cost point of view. In this article, we set out to confirm (or refute) this intuition by evaluating the Total Cost of Ownership (TCO) of the in-house HPC facility we have operated since 2007 within the University of Luxembourg (UL), and by comparing it with the investment that would have been required to run the same platform (and the same workload) on a competitive Cloud IaaS offer. Our approach to this price comparison is two-fold. First, we propose a theoretical price-performance model based on a study of the actual Cloud instances offered by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on our own cluster TCO and taking into account all the Operating Expenses (OPEX), we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. The results obtained generally advocate the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing (CC) platforms, even when provided by the leading Cloud provider worldwide.

…; Besseron, Xavier. In Proc. of the 7th International Supercomputing Conference in Mexico (ISUM 2016) (2016).

Emeras, Joseph. In Proc. of the 19th Intl. Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP'15), part of the 29th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2015) (2015, May).

Abstract: At the advent of a wished (or forced) convergence between High Performance Computing (HPC) platforms, stand-alone accelerators and virtualized resources from Cloud Computing (CC) systems, this article unveils the job prediction component of the Evalix project. This framework aims at improving the efficiency of the underlying Resource and Job Management System (RJMS) within heterogeneous HPC facilities through the automatic evaluation and characterization of the submitted workload. The objective is not only to better adapt the scheduled jobs to the available resource capabilities, but also to reduce energy costs. For that purpose, we collected the resource consumption of all the jobs executed on a production cluster over a period of three months. Based on the analysis and then the classification of these jobs, we computed a resource consumption model. The objective is to train a set of predictors based on this model that will estimate the CPU, memory and I/O used by the jobs. The analysis of the resource consumption highlighted that different classes of jobs have different kinds of resource needs, and the classification of the jobs made it possible to characterize several application patterns of the users.
We also discovered that several users, whose resource usage on the cluster is considered too low, are responsible for a loss of CPU time on the order of five years over the considered three-month period. The predictors, trained with a supervised learning algorithm, were able to correctly classify a large set of data. We evaluated them with three performance indicators, which gave an information retrieval rate of 71% to 89% and a probability of accurate prediction between 0.7 and 0.8. The results of this work will be particularly helpful for designing an optimal partitioning of the considered heterogeneous platform, taking into account the real application needs and thus leading to energy savings and performance improvements. Moreover, apart from the novelty of the contribution, the accurate classification scheme offers new insights into user behavior that are of interest for the design of future HPC platforms.
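The predictors described in this abstract are, at heart, supervised classifiers that map per-job accounting features (CPU, memory, I/O) to resource-consumption classes. Below is a minimal, self-contained sketch of that idea on synthetic data; the feature set, the class labels and the random-forest classifier are assumptions made for illustration, not the actual Evalix predictors or dataset.

```python
# Minimal sketch of supervised job classification by resource consumption,
# in the spirit of the Evalix predictors described above. The features,
# class labels and classifier choice are illustrative assumptions, not the
# actual Evalix implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic accounting data: one row per job with average CPU usage (%),
# peak memory (GB) and I/O volume (GB). Real data would come from the
# RJMS accounting logs collected over the three-month period.
n_jobs = 3000
X = np.column_stack([
    rng.uniform(0, 100, n_jobs),      # cpu_pct
    rng.lognormal(1.0, 1.0, n_jobs),  # mem_gb
    rng.lognormal(0.5, 1.2, n_jobs),  # io_gb
])

# Illustrative ground-truth classes: idle, IO-bound, memory-bound, CPU-bound.
def label(row):
    cpu, mem, io = row
    if cpu < 10:
        return "idle"
    if io > 10:
        return "io-bound"
    if mem > 20:
        return "memory-bound"
    return "cpu-bound"

y = np.array([label(r) for r in X])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Per-class precision and recall play the role of the paper's
# information-retrieval-style performance indicators.
print(classification_report(y_test, clf.predict(X_test)))
```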