Varrette, Sébastien et al.
in 20th IEEE/ACM Intl. Symp. on Cluster, Cloud and Internet Computing (CCGrid'20) (2020, May)

Abstract: With renewed global interest in Artificial Intelligence (AI) methods, the past decade has seen a myriad of new programming models and tools that enable better and faster Machine Learning (ML). More recently, a subset of ML known as Deep Learning (DL) has attracted increasing interest due to its inherent ability to tackle novel cognitive computing applications efficiently. DL allows computational models composed of multiple processing layers to learn, in an automated way, representations of data with multiple levels of abstraction, and can deliver higher predictive accuracy when trained on larger data sets. Based on Artificial Neural Networks (ANN), DL is now at the core of state-of-the-art voice recognition systems (which enable easy control over, e.g., Internet-of-Things (IoT) smart home appliances), self-driving car engines and online recommendation systems. The ecosystem of DL frameworks is evolving fast, as are the DL architectures that are shown to perform well on specialized tasks and to exploit GPU accelerators. For this reason, frequent performance evaluation of the DL ecosystem is required, especially since the advent of novel distributed training frameworks such as Horovod, which allow scalable training across multiple computing resources. In this paper, the scalability of the reference DL frameworks (TensorFlow, Keras, MXNet and PyTorch) is evaluated over up-to-date High Performance Computing (HPC) resources to compare the efficiency of different implementations across several hardware architectures (CPU and GPU). Experimental results demonstrate that the DistributedDataParallel features of the PyTorch library appear to be the most efficient approach for distributing the training process across many devices, reaching a throughput speedup of 10.11 with 12 NVidia Tesla V100 GPUs when training Resnet44 on the CIFAR10 dataset.
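As a rough illustration of the distribution mechanism the abstract singles out, the sketch below uses PyTorch's DistributedDataParallel (DDP) for data-parallel training with one process per GPU. It is a minimal, self-contained example under assumed settings (ResNet-18 on CIFAR-10, arbitrary hyper-parameters, script name train_ddp.py), not the experimental setup of the paper (Resnet44 on CIFAR10 across 12 V100 GPUs).

```python
# Minimal DDP training sketch. Model, dataset and hyper-parameters are
# illustrative assumptions, not the configuration evaluated in the paper.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, models, transforms


def main():
    # One process per GPU; rank, world size and LOCAL_RANK are provided by
    # the launcher (e.g. torchrun, or an MPI/Slurm wrapper on an HPC cluster).
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # ResNet-18 stands in for the network used in the paper (assumption).
    model = models.resnet18(num_classes=10).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])

    train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                                 transform=transforms.ToTensor())
    # DistributedSampler shards the dataset so each process sees a distinct slice.
    sampler = DistributedSampler(train_set)
    loader = DataLoader(train_set, batch_size=128, sampler=sampler, num_workers=4)

    criterion = nn.CrossEntropyLoss().cuda(local_rank)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1, momentum=0.9)

    for epoch in range(2):            # short run, for illustration only
        sampler.set_epoch(epoch)      # reshuffle differently at each epoch
        for images, labels in loader:
            images = images.cuda(local_rank, non_blocking=True)
            labels = labels.cuda(local_rank, non_blocking=True)
            optimizer.zero_grad()
            loss = criterion(ddp_model(images), labels)
            loss.backward()           # gradients are all-reduced by DDP
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with one process per GPU, e.g. `torchrun --nproc_per_node=4 train_ddp.py`, DDP averages gradients across processes after every backward pass, which is what allows the training throughput to scale with the number of devices.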
Plugaru, Valentin. Presentation (2019, June 20).

Varrette, Sébastien et al.
in IEEE Transactions on Cloud Computing (2019), 7(2), 456-468

Abstract: While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, we observe a wish for convergence between Cloud Computing (CC) and HPC platforms, with the commercial hope that Cloud Computing infrastructures will eventually replace in-house facilities. If we exclude the performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when running an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that the instances offered by Cloud providers are most probably competitive from a cost point of view. In this article, we wanted to assert (or refute) this intuition by analyzing what composes the Total Cost of Ownership (TCO) of an in-house HPC facility operated internally since 2007. This TCO model is then used to compare against the cost that would have been required to run the same platform (and the same workload) on a competitive Cloud IaaS offer. Our approach to this price comparison is three-fold. First, we propose a theoretical price-performance model based on the study of the actual Cloud instances proposed by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on the TCO analysis of the HPC facility, we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. Finally, based on experimental benchmarking of the local cluster and of the Cloud instances, we update the former theoretical price model to reflect the real system performance. The results obtained advocate in general for the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing platforms, even when they are provided by the reference Cloud provider worldwide.
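To make the kind of hourly price comparison described above concrete, here is a small illustrative sketch of how an amortised in-house node cost could be set against a performance-normalised cloud instance price. The functions and all figures (CAPEX, OPEX, utilisation, on-demand price, performance ratio) are hypothetical placeholders, not the model or the values used in the article.

```python
# Illustrative hourly cost comparison: amortised in-house HPC node vs.
# an "equivalent" cloud instance corrected by a benchmarked performance
# ratio. All numbers are placeholder assumptions, NOT results from the paper.

def inhouse_hourly_cost(capex, lifetime_years, yearly_opex, utilisation=0.8):
    """Hourly cost of one in-house node: amortised hardware (CAPEX) plus
    yearly operating expenses (power, cooling, staff, ...), spread over the
    hours the node is actually used."""
    hours_per_year = 365 * 24 * utilisation
    return capex / (lifetime_years * hours_per_year) + yearly_opex / hours_per_year


def cloud_effective_hourly_cost(on_demand_price, perf_ratio=1.0):
    """Hourly cost of the cloud instance, normalised by the measured
    performance ratio (cloud throughput / in-house throughput) so that both
    sides are compared at equal delivered work."""
    return on_demand_price / perf_ratio


if __name__ == "__main__":
    # Hypothetical inputs, for illustration only.
    local = inhouse_hourly_cost(capex=15000.0, lifetime_years=5,
                                yearly_opex=2000.0, utilisation=0.8)
    cloud = cloud_effective_hourly_cost(on_demand_price=3.0, perf_ratio=0.9)
    print(f"in-house node : {local:.2f} EUR/hour")
    print(f"cloud instance: {cloud:.2f} EUR/hour (performance-normalised)")
```

The performance normalisation in the second function reflects the last step of the described approach: once benchmarks show how much slower (or faster) the cloud instance runs the same workload, the raw on-demand price is rescaled so both platforms are priced per unit of delivered work rather than per wall-clock hour.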
Varrette, Sébastien. Presentation (2018, June).
Plugaru, Valentin. Presentation (2018, June).
Varrette, Sébastien. Presentation (2018, June).
Plugaru, Valentin. Presentation (2018, June).
Diehl, Sarah. Presentation (2018, June).
Plugaru, Valentin. Presentation (2018, June).
Parisot, Clément. Presentation (2018, June).
Cartiaux, Hyacinthe. Presentation (2018, June).
Plugaru, Valentin. Presentation (2018, June).
Varrette, Sébastien. Presentation (2018, June).
Plugaru, Valentin. Presentation (2018, June).
Varrette, Sébastien. Presentation (2018, June).
Plugaru, Valentin. Presentation (2018, June).
Ginolhac, Aurélien. Presentation (2018, June).
Parisot, Clément. Presentation (2018, June).
Bouvry, Pascal. Presentation (2018, April).
Varrette, Sébastien. Presentation (2017, November).