Communication published on a website (Colloquia, congresses, scientific conferences and proceedings)
How Effective is Pre-training of Large Masked Autoencoders for Downstream Earth Observation Tasks?
SOSA MARTINEZ, Jose Angel; Aloulou, Mohamed; RUKHOVICH, Danila et al.
2024, The 35th British Machine Vision Conference
Editorial reviewed
 

Documents


Full text
paper5.pdf
Author postprint (942.43 kB)
Details



Keywords:
Computer Science - Computer Vision and Pattern Recognition
Abstract:
[en] Self-supervised pre-training has proven highly effective for many computer vision tasks, particularly when labelled data are scarce. In the context of Earth Observation (EO), foundation models and various other Vision Transformer (ViT)-based approaches have been successfully applied for transfer learning to downstream tasks. However, it remains unclear under which conditions pre-trained models offer significant advantages over training from scratch. In this study, we investigate the effectiveness of pre-training ViT-based Masked Autoencoders (MAE) for downstream EO tasks, focusing on reconstruction, segmentation, and classification. We consider two large ViT-based MAE pre-trained models: a foundation model (Prithvi) and SatMAE. We evaluate Prithvi on reconstruction and segmentation-based downstream tasks, and for SatMAE we assess its performance on a classification downstream task. Our findings suggest that pre-training is particularly beneficial when the fine-tuning task closely resembles the pre-training task, e.g. reconstruction. In contrast, for tasks such as segmentation or classification, training from scratch with specific hyperparameter adjustments proved to be equally or more effective.
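The abstract centres on MAE-style pre-training, whose defining step is masking a large fraction of image patches (typically 75% in the original MAE recipe) and reconstructing them from the visible remainder. A minimal, illustrative sketch of that random patch-masking step; the function name and the 75% default are assumptions drawn from the standard MAE formulation, not details taken from this paper:

```python
import random

def mask_patches(num_patches, mask_ratio=0.75, seed=0):
    """Randomly split patch indices into visible and masked sets,
    as in MAE-style pre-training: the encoder sees only the visible
    patches, and the decoder reconstructs the masked ones."""
    rng = random.Random(seed)
    indices = list(range(num_patches))
    rng.shuffle(indices)
    num_masked = int(num_patches * mask_ratio)
    masked = sorted(indices[:num_masked])
    visible = sorted(indices[num_masked:])
    return visible, masked

# Example: a 224x224 image with 16x16 patches yields 14*14 = 196 patches,
# of which 147 are masked and 49 remain visible at a 75% mask ratio.
visible, masked = mask_patches(196)
```

The high mask ratio is what makes the reconstruction objective non-trivial for natural and satellite imagery alike, and it is the part of the pre-training task that the paper's reconstruction-based downstream task most closely resembles.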
Research centre:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI² - Computer Vision Imaging & Machine Intelligence
Disciplines:
Computer science
Author, co-author:
SOSA MARTINEZ, Jose Angel ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
Aloulou, Mohamed
RUKHOVICH, Danila ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
Sleimi, Rim
CHANGAIVAL, Boonyarit ;  University of Luxembourg
KACEM, Anis  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
AOUADA, Djamila  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
External co-authors:
yes
Document language:
English
Title:
How Effective is Pre-training of Large Masked Autoencoders for Downstream Earth Observation Tasks?
Publication date:
27 November 2024
Event name:
The 35th British Machine Vision Conference
Event organizer:
British Machine Vision Association (BMVA)
Event location:
Glasgow, United Kingdom
Event dates:
25 to 28 November 2024
Event scope:
International
Peer reviewed :
Editorial reviewed
FnR project:
HPC_BRIDGES/2022/17978225/AI4CC
Available on ORBilu:
since 08 January 2025
