Orbital debris removal and On-orbit Servicing, Assembly and Manufacturing (OSAM) are the main areas for future robotic space missions. Achieving intelligence and autonomy in these missions, and carrying out robotic operations, requires autonomous guidance and navigation, especially vision-based navigation. With recent advances in machine learning, state-of-the-art Deep Learning (DL) approaches to object detection and camera pose estimation have become competitive with classical approaches and can be used for target pose estimation in relative navigation scenarios. State-of-the-art DL-based spacecraft pose estimation approaches are suitable for any known target with significant surface texture. However, they are less applicable when the target is a texture-less and symmetric object, such as a rocket nozzle. This paper investigates a novel ellipsoid-based approach combined with convolutional neural networks for texture-less space object pose estimation. It also presents a dataset for a new texture-less space target, an apogee kick motor, which is used in this study; the dataset includes synthetic images generated with a simulator developed for rendering synthetic space imagery.
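To make the geometry behind an ellipsoid-based formulation concrete, the sketch below projects a known ellipsoid model of the target through a pinhole camera using the standard dual-quadric/dual-conic relation C* = P Q* P^T. This is a minimal, hypothetical Python/NumPy illustration, not the paper's implementation; the intrinsics, target pose, and semi-axes are placeholder values.

    import numpy as np

    # Known target model: dual quadric of the canonical ellipsoid
    # x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 (placeholder semi-axes, in metres).
    a, b, c = 0.5, 0.5, 0.8
    Q_dual = np.diag([a**2, b**2, c**2, -1.0])

    # Placeholder rigid pose of the ellipsoid in the camera frame.
    R = np.eye(3)
    t = np.array([0.1, -0.2, 5.0])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    Q_dual_cam = T @ Q_dual @ T.T        # dual quadrics transform as T Q* T^T

    # Pinhole camera: placeholder intrinsics and 3x4 projection matrix.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

    # The ellipsoid's silhouette is an ellipse with dual conic C* = P Q* P^T.
    C_dual = P @ Q_dual_cam @ P.T

    # Ellipse center = pole of the line at infinity, i.e. last column of C*.
    center = C_dual[:2, 2] / C_dual[2, 2]
    print("Projected ellipse center (px):", center)

A CNN can be trained to predict the parameters of this image ellipse; the relative position then follows from solving the inverse problem (Perspective-1-Ellipsoid), matching the predicted ellipse against projections of the known ellipsoid model.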
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Computer Vision Imaging & Machine Intelligence (CVI²)
Disciplines:
Aerospace engineering
Author, co-author:
RATHINAM, Arunkumar ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI²
GAUDILLIERE, Vincent ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI²
PAULY, Leo ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI²
AOUADA, Djamila ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI²
External co-authors:
no
Document language:
English
Title:
Pose Estimation of a Known Texture-Less Space Target using Convolutional Neural Networks
Publication date:
September 2022
Event name:
73rd International Astronautical Congress
Event organizer:
International Astronautical Federation
Event location:
Paris, France
Event date:
18-22 September 2022
Event scope:
International
Title of the main work:
73rd International Astronautical Congress, Paris, 18-22 September 2022
FNR project:
FNR14755859 - Multi-modal Fusion Of Electro-optical Sensors For Spacecraft Pose Estimation Towards Autonomous In-orbit Operations, 2020 (01/01/2021-31/12/2023) - Djamila Aouada