Abstract:
Accurately estimating Remaining Useful Life (RUL) in industrial systems is crucial for optimizing maintenance strategies and extending the lifespan of assets. Data-driven RUL models leverage machine learning (ML) algorithms to extract patterns from operational data and excel at capturing complex relationships. Despite advancements in RUL prognosis models, the black-box nature of machine learning algorithms poses challenges for industrial users, hindering trust and adoption.
Explainable Artificial Intelligence (XAI) methods offer promising solutions by making complex models transparent and interpretable. This paper focuses on applying XAI methods to enhance trust in machine learning models for RUL prognosis. We emphasize a quantitative assessment of explanation mechanisms, including metrics such as consistency and robustness. Our study contributes to developing more trustworthy and reliable predictive maintenance strategies.
We evaluate XAI methods that explain RUL models in a real-world scenario based on industrial furnace data. Our findings aim to provide valuable insights for industrial practitioners, guiding them in selecting RUL prognosis techniques.
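To illustrate the kind of quantitative assessment the abstract refers to, the sketch below shows one possible way to score explanation robustness for an RUL regressor. It is not the paper's implementation: SHAP is used as a stand-in XAI method, a scikit-learn GradientBoostingRegressor plays the role of the RUL model, the synthetic sensor data merely substitutes for the furnace telemetry, and the robustness_score helper is a hypothetical perturbation-based metric.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
import shap

# Synthetic stand-in for sensor readings and RUL targets (not real furnace data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
rul = 100 - 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=2.0, size=500)

# A generic tree-based regressor acting as the RUL prognosis model.
model = GradientBoostingRegressor().fit(X, rul)
explainer = shap.TreeExplainer(model)

def robustness_score(x, eps=0.05, n_perturbations=20):
    # Mean change in the SHAP attribution vector per unit of input perturbation;
    # lower values indicate a more robust (stable) explanation.
    base = explainer.shap_values(x.reshape(1, -1))[0]
    shifts = []
    for _ in range(n_perturbations):
        x_pert = x + rng.normal(scale=eps, size=x.shape)
        attr = explainer.shap_values(x_pert.reshape(1, -1))[0]
        shifts.append(np.linalg.norm(attr - base) / np.linalg.norm(x_pert - x))
    return float(np.mean(shifts))

print("robustness score for one sample:", robustness_score(X[0]))

Lower scores mean that small perturbations of the input barely change the attribution vector, which is one common way to operationalize explanation robustness; consistency could be scored analogously by comparing attributions produced by models retrained on different data splits.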
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal - Security, Reasoning & Validation