[en] Recent research at CHU Sainte-Justine's Pediatric Critical Care Unit (PICU)
has revealed that traditional machine learning methods, such as semi-supervised
label propagation and K-nearest neighbors, outperform Transformer-based models
at detecting artifacts in photoplethysmography (PPG) signals, particularly when
labeled data is limited. This study
addresses the underutilization of abundant unlabeled data by employing
self-supervised learning (SSL) to extract latent features from these data,
followed by fine-tuning on labeled data. Our experiments demonstrate that SSL
significantly enhances the Transformer model's ability to learn
representations, improving its robustness in artifact classification tasks.
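To make this pipeline concrete, the following is a minimal PyTorch sketch of
the pretrain-then-fine-tune workflow described above: a small Transformer
encoder is first pretrained on unlabeled PPG segments (masked reconstruction
serves as the pretext task here) and its weights are then reused under a
classification head fine-tuned on the limited labeled data. All names and
hyperparameters (PPGEncoder, segment length, mask ratio, model sizes) are
illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class PPGEncoder(nn.Module):
    """Small Transformer encoder over fixed-length 1-D PPG segments."""
    def __init__(self, seg_len=256, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)            # per-sample embedding
        self.pos = nn.Parameter(torch.zeros(1, seg_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                             # x: (batch, seg_len)
        h = self.embed(x.unsqueeze(-1)) + self.pos    # (batch, seg_len, d_model)
        return self.encoder(h)

encoder = PPGEncoder()
decoder = nn.Linear(64, 1)     # reconstruction head, used only for pretraining
classifier = nn.Linear(64, 2)  # artifact/clean head, used for fine-tuning

def pretrain_step(x, mask_ratio=0.3):
    """Masked-reconstruction pretext task on an unlabeled batch x: (B, 256)."""
    mask = torch.rand_like(x) < mask_ratio            # sample points to hide
    recon = decoder(encoder(x.masked_fill(mask, 0.0))).squeeze(-1)
    return ((recon - x) ** 2)[mask].mean()            # MSE on masked points only

def finetune_step(x, y):
    """Supervised fine-tuning on the small labeled set (y: artifact labels)."""
    z = encoder(x).mean(dim=1)                        # mean-pool over time
    return nn.functional.cross_entropy(classifier(z), y)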
Among the SSL techniques evaluated, including masking, contrastive learning,
and DINO (self-distillation with no labels), contrastive learning exhibited the
most stable and superior performance on small PPG datasets.
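For context, contrastive SSL on physiological signals typically builds positive
pairs from two random augmentations of the same segment and treats the other
segments in the batch as negatives. A minimal sketch, reusing the encoder from
the sketch above and assuming common time-series augmentations (amplitude
scaling and Gaussian jitter) rather than the paper's exact choices:

import torch

def augment(x, noise_std=0.05, scale_range=0.1):
    """Random amplitude scaling plus Gaussian jitter on a batch (B, seg_len)."""
    scale = 1.0 + (torch.rand(x.size(0), 1) * 2 - 1) * scale_range
    return x * scale + torch.randn_like(x) * noise_std

def embed_views(x):
    """Embed two independent augmentations of the same segments."""
    z1 = encoder(augment(x)).mean(dim=1)              # (B, d_model)
    z2 = encoder(augment(x)).mean(dim=1)              # positive partner of z1
    return z1, z2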
Further, we investigate the optimization of contrastive loss functions, which
are crucial to contrastive SSL.
Inspired by InfoNCE, we introduce a novel contrastive loss function that
facilitates smoother training and better convergence, thereby enhancing
performance in artifact classification.
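The paper's modified loss is not reproduced here; as a reference point, this is
the standard InfoNCE objective it builds on, applied to the paired embeddings
from the sketch above. Each view is matched to its positive partner on the
diagonal of a cosine-similarity matrix, with the remaining batch entries acting
as negatives; the temperature value is an assumed hyperparameter.

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss over a batch of positive pairs (z1[i], z2[i])."""
    z1 = F.normalize(z1, dim=1)                       # L2-normalise so the dot
    z2 = F.normalize(z2, dim=1)                       # products are cosines
    logits = z1 @ z2.t() / temperature                # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)

# Usage: loss = info_nce(*embed_views(batch))

The contribution described in the abstract modifies this baseline for smoother
training and better convergence; the exact formulation is given in the paper.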
In summary, this study establishes the efficacy of SSL in leveraging unlabeled
data, particularly in enhancing the
capabilities of the Transformer model. This approach holds promise for broader
applications in PICU environments, where annotated data is often limited.
Disciplines:
Computer science
Author, co-author:
LE, Thanh-Dung; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SigCom
Language:
English
Title:
Boosting Transformer's Robustness and Efficacy in PPG Signal Artifact Detection with Self-Supervised Learning