Paper published in a book (colloquia, congresses, scientific conferences and proceedings)
Monitoring and Predicting Hardware Failures in HPC Clusters with FTB-IPMI
Rajachandrasekar, Raghunath; BESSERON, Xavier; Panda, Dhabaleswar K.
2012, in Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum
Peer reviewed
 

Documents


Full text
paper.final.pdf
Author postprint (245.91 kB)

All documents in ORBilu are protected by a usage license.




Details



Keywords:
Fault detection; Coordinated fault propagation; IPMI; FTB; Clusters
Abstract:
[en] Fault detection and prediction in HPC clusters and cloud-computing systems are increasingly challenging issues. Several system middleware components, such as job schedulers and MPI implementations, provide support for both reactive and proactive mechanisms to tolerate faults. These techniques rely on external components, such as system logs and infrastructure monitors, to provide information about hardware/software failures, either through detection or as a prediction. However, these middleware components work in isolation, without disseminating the knowledge of faults encountered. In this context, we propose a lightweight multi-threaded service, namely FTB-IPMI, which provides distributed fault monitoring using the Intelligent Platform Management Interface (IPMI) and coordinated propagation of fault information using the Fault-Tolerance Backplane (FTB). In essence, it serves as a middleman between system hardware and the software stack by translating raw hardware events into structured software events and delivering them to any interested component using a publish-subscribe framework. Fault predictors and other decision-making engines that rely on distributed failure information can benefit from FTB-IPMI to facilitate proactive fault-tolerance mechanisms such as preemptive job migration. We have developed a fault-prediction engine within MVAPICH2, an RDMA-based MPI implementation, to demonstrate this capability. Failure predictions made by this engine trigger the migration of processes from failing nodes to healthy spare nodes, thereby providing resilience to the MPI application. Experimental evaluation clearly indicates that a single instance of FTB-IPMI can scale to several hundred nodes with a remarkably low resource-utilization footprint. A deployment of FTB-IPMI servicing a cluster with 128 compute nodes sweeps the entire cluster and collects IPMI sensor information on CPU temperature, system voltages and fan speeds in about 0.75 seconds. The average CPU utilization of this service running on a single node is 0.35%.
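The abstract describes FTB-IPMI as a translator between raw hardware sensor readings and structured software events, delivered over a publish-subscribe backplane to consumers such as fault predictors. The following minimal Python sketch illustrates that idea only; it is not the actual FTB or IPMI API. The `EventBus` class, the `hardware.sensor` topic name, the event schema, and the 85 °C threshold are all illustrative assumptions.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process publish-subscribe bus (illustrative stand-in for FTB)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every component subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(event)

def translate_sensor_reading(node, sensor, value):
    """Turn a raw sensor reading into a structured software event.

    The schema and the 85.0 °C warning threshold are hypothetical."""
    severity = "warning" if sensor == "cpu_temp" and value > 85.0 else "info"
    return {"node": node, "sensor": sensor, "value": value, "severity": severity}

# A naive "fault predictor" that flags nodes raising warning-level events.
flagged_nodes = []

def predictor(event):
    if event["severity"] == "warning":
        flagged_nodes.append(event["node"])

bus = EventBus()
bus.subscribe("hardware.sensor", predictor)

# Simulated sweep of two compute nodes' CPU temperature sensors.
for node, temp in [("node01", 62.0), ("node02", 91.5)]:
    bus.publish("hardware.sensor", translate_sensor_reading(node, "cpu_temp", temp))

print(flagged_nodes)  # the overheating node ends up flagged
```

In the real system, the predictor's decision would feed a proactive mechanism such as preemptive process migration; here it merely records the node name, to keep the sketch self-contained.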
Disciplines:
Computer science
Identifiers:
UNILU:UL-CONFERENCE-2012-421
Author, co-author:
Rajachandrasekar, Raghunath;  Network-Based Computing Laboratory, The Ohio State University
BESSERON, Xavier  ;  University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)
Panda, Dhabaleswar K.;  Network-Based Computing Laboratory, The Ohio State University
Document language:
English
Title:
Monitoring and Predicting Hardware Failures in HPC Clusters with FTB-IPMI
Publication date:
2012
Event name:
International Workshop on System Management Techniques, Processes, and Services (SMTPS'12), held in conjunction with IPDPS'12
Event location:
Shanghai, China
Event dates:
May 21-25, 2012
Book title:
Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum
Publisher:
IEEE Computer Society
ISBN/EAN:
978-1-4673-0974-5
Pages:
1136-1143
Peer reviewed:
Peer reviewed
Focus Area:
Computational Sciences
Available on ORBilu:
since May 13, 2013

Statistics

Views: 165 (including 9 from Unilu)
Downloads: 2 (including 2 from Unilu)
Scopus® citations: 29 (29 excluding self-citations)
OpenAlex citations: 31
