Article (Scientific journals)
A survey on deep learning-based monocular spacecraft pose estimation: Current state, limitations and prospects
PAULY, Leo; RHARBAOUI, Wassim; SHNEIDER, Carl et al.
2023 • In Acta Astronautica, 212, p. 339-360

Files


Full Text
1-s2.0-S0094576523003995-main.pdf
Publisher postprint (4.47 MB)

© 2023 The Authors. Published by Elsevier Ltd on behalf of IAA. This is an open-access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).



Details



Keywords :
Spacecraft pose estimation algorithms; Simulators and testbeds; Domain adaptation
Abstract :
[en] Estimating the pose of an uncooperative spacecraft is an important computer vision problem for enabling the deployment of automatic vision-based systems in orbit, with applications ranging from on-orbit servicing to space debris removal. Following the general trend in computer vision, more and more works have been leveraging Deep Learning (DL) methods to address this problem. However, despite promising research-stage results, major challenges still prevent the use of such methods in real-life missions. In particular, the deployment of such computation-intensive algorithms remains under-investigated, and the performance drop observed when training on synthetic images and testing on real ones remains to be mitigated. The primary goal of this survey is to describe the current DL-based methods for spacecraft pose estimation in a comprehensive manner. The secondary goal is to help define the limitations that stand in the way of effectively deploying DL-based spacecraft pose estimation solutions for reliable autonomous vision-based applications. To this end, the survey first summarises the existing algorithms according to two approaches: hybrid modular pipelines and direct end-to-end regression methods. A comparison of algorithms is presented not only in terms of pose accuracy but also with a focus on network architectures and model sizes, keeping potential deployment in mind. Then, current monocular spacecraft pose estimation datasets used to train and test these methods are discussed. The data-generation methods (simulators and testbeds), the domain gap and performance drop between synthetically generated and lab- or space-collected images, and potential solutions are also discussed. Finally, the paper presents open research questions and future directions in the field, drawing parallels with other computer vision applications.
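
To make the two families of methods mentioned in the abstract concrete, the following is a minimal, illustrative sketch (not code from the survey) of the hybrid modular pipeline approach: a learned network predicts the 2D image locations of known 3D landmarks on the target spacecraft, and a classical Perspective-n-Point (PnP) solver then recovers the relative pose. All names and values (predict_keypoints, KEYPOINTS_3D, the intrinsics matrix K, the dummy detections) are hypothetical placeholders; only OpenCV's solvePnP and Rodrigues calls are real library functions.

"""Minimal sketch of a hybrid modular spacecraft pose-estimation pipeline:
learned 2D keypoint prediction followed by classical PnP. Illustrative only;
not the survey's code."""
import numpy as np
import cv2

# Hypothetical 3D landmark coordinates in the spacecraft body frame (metres).
KEYPOINTS_3D = np.array([
    [ 0.5,  0.5,  0.0],
    [-0.5,  0.5,  0.0],
    [-0.5, -0.5,  0.0],
    [ 0.5, -0.5,  0.0],
    [ 0.0,  0.0,  0.3],
    [ 0.0,  0.0, -0.3],
], dtype=np.float64)

# Placeholder pinhole camera intrinsics (focal lengths / principal point, pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def predict_keypoints(image):
    """Stand-in for a learned keypoint-regression network.
    A real pipeline would run a CNN on the image and return one (u, v)
    pixel location per landmark; here we return fixed dummy detections."""
    return np.array([
        [420.0, 180.0],
        [250.0, 175.0],
        [245.0, 300.0],
        [415.0, 305.0],
        [335.0, 230.0],
        [330.0, 255.0],
    ], dtype=np.float64)

def estimate_pose(image):
    """Hybrid modular pipeline: keypoint detection, then a PnP solve."""
    keypoints_2d = predict_keypoints(image)
    ok, rvec, tvec = cv2.solvePnP(
        KEYPOINTS_3D, keypoints_2d, K, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix (body frame -> camera frame)
    return R, tvec.ravel()       # relative attitude and position

if __name__ == "__main__":
    R, t = estimate_pose(image=None)
    print("Rotation:\n", R, "\nTranslation:", t)

By contrast, the direct end-to-end regression methods covered by the survey replace both stages with a single network that maps the input image straight to a pose representation (e.g. a translation vector and an attitude quaternion), without an explicit geometric solver.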
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI² - Computer Vision Imaging & Machine Intelligence
Disciplines :
Engineering, computing & technology: Multidisciplinary, general & others
Author, co-author :
PAULY, Leo ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
RHARBAOUI, Wassim ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
SHNEIDER, Carl ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
RATHINAM, Arunkumar  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
GAUDILLIERE, Vincent ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
AOUADA, Djamila  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
External co-authors :
no
Language :
English
Title :
A survey on deep learning-based monocular spacecraft pose estimation: Current state, limitations and prospects
Publication date :
November 2023
Journal title :
Acta Astronautica
eISSN :
1879-2030
Publisher :
Elsevier, Oxford, United Kingdom
Volume :
212
Pages :
339-360
Peer reviewed :
Peer Reviewed verified by ORBi
FnR Project :
FNR14755859 - Multi-modal Fusion Of Electro-optical Sensors For Spacecraft Pose Estimation Towards Autonomous In-orbit Operations, 2020 (01/01/2021-31/12/2023) - Djamila Aouada
Funders :
FNR - Fonds National de la Recherche [LU]
Commentary :
This work was funded by the Luxembourg National Research Fund (FNR), under the projects MEET-A (reference: BRIDGES2020/IS/14755859/MEET-A/Aouada) and ELITE (reference: C21/IS/15965298/ELITE).
Available on ORBilu :
since 01 October 2023
