Keywords :
Spacecraft pose estimation algorithms; Simulators and testbeds; Domain adaptation
Abstract :
[en] Estimating the pose of an uncooperative spacecraft is an important computer vision problem for enabling the deployment of automatic vision-based systems in orbit, with applications ranging from on-orbit servicing to space debris removal. Following the general trend in computer vision, a growing number of works have focused on leveraging Deep Learning (DL) methods to address this problem. However, despite promising research-stage results, major challenges still prevent the use of such methods in real-life missions. In particular, the deployment of such computation-intensive algorithms remains under-investigated, and the performance drop incurred when training on synthetic images and testing on real ones remains to be mitigated. The primary goal of this survey is to describe the current DL-based methods for spacecraft pose estimation in a comprehensive manner. The secondary goal is to help identify the limitations that stand in the way of effectively deploying DL-based spacecraft pose estimation solutions for reliable autonomous vision-based applications. To this end, the survey first summarises the existing algorithms according to two approaches: hybrid modular pipelines and direct end-to-end regression methods. A comparison of algorithms is presented not only in terms of pose accuracy but also with a focus on network architectures and model sizes, keeping potential deployment in mind. Then, current monocular spacecraft pose estimation datasets used to train and test these methods are discussed. The data generation methods, namely simulators and testbeds, the domain gap, the performance drop between synthetically generated and lab/space-collected images, and potential solutions are also discussed. Finally, the paper presents open research questions and future directions in the field, drawing parallels with other computer vision applications.
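For orientation, the pose accuracy referred to in the abstract is commonly scored in this literature as the angular distance between predicted and ground-truth orientation quaternions plus a range-normalised translation error, in the spirit of the satellite pose estimation challenge metrics. A minimal NumPy sketch of such a score, with hypothetical inputs; the exact formula varies by benchmark:

```python
import numpy as np

def pose_error(q_gt, t_gt, q_est, t_est):
    """Rotation error (radians) between unit quaternions plus a
    range-normalised translation error; a generic sketch of the kind
    of score used in spacecraft pose estimation benchmarks."""
    q_gt = np.asarray(q_gt, dtype=float)
    q_est = np.asarray(q_est, dtype=float)
    q_gt /= np.linalg.norm(q_gt)
    q_est /= np.linalg.norm(q_est)
    # |<q_gt, q_est>| handles the double cover: q and -q encode the same rotation.
    dot = np.clip(abs(np.dot(q_gt, q_est)), 0.0, 1.0)
    e_rot = 2.0 * np.arccos(dot)
    t_gt = np.asarray(t_gt, dtype=float)
    t_est = np.asarray(t_est, dtype=float)
    # Normalising by the target range keeps the score comparable across distances.
    e_trans = np.linalg.norm(t_gt - t_est) / np.linalg.norm(t_gt)
    return e_rot, e_trans
```

A perfect estimate yields (0.0, 0.0); summing the two terms gives a single scalar score, as done in the SPEED-style competitions surveyed.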
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI² - Computer Vision Imaging & Machine Intelligence
Disciplines :
Engineering, computing & technology: Multidisciplinary, general & others
Author, co-author :
PAULY, Leo ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
RHARBAOUI, Wassim ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
SHNEIDER, Carl ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
RATHINAM, Arunkumar ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
GAUDILLIERE, Vincent ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
AOUADA, Djamila ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
External co-authors :
no
Language :
English
Title :
A survey on deep learning-based monocular spacecraft pose estimation: Current state, limitations and prospects
Publication date :
November 2023
Journal title :
Acta Astronautica
ISSN :
0094-5765
eISSN :
1879-2030
Publisher :
Elsevier, Oxford, United Kingdom
Volume :
212
Pages :
339-360
Peer reviewed :
Peer Reviewed verified by ORBi
FnR Project :
FNR14755859 - Multi-modal Fusion Of Electro-optical Sensors For Spacecraft Pose Estimation Towards Autonomous In-orbit Operations, 2020 (01/01/2021-31/12/2023) - Djamila Aouada
Funders :
FNR - Fonds National de la Recherche
Commentary :
This work was funded by the Luxembourg National Research Fund (FNR), under the projects MEET-A (reference: BRIDGES2020/IS/14755859/MEET-A/Aouada) and ELITE (reference: C21/IS/15965298/ELITE).
H. Jones, The recent large reduction in space launch cost, in: 48th International Conference on Environmental Systems, 2018.
Witze, A., 2022 was a record year for space launches. Nat. News, 2023. URL: https://www.nature.com/articles/d41586-023-00048-7.
J. Kreisel, On-Orbit servicing of satellites (OOS): its potential market & impact, in: Proceedings of 7th ESA Workshop on Advanced Space Technologies for Robotics and Automation, ASTRA, 2002.
Li, W.J., Cheng, D.Y., Liu, X.G., Wang, Y.B., Shi, W.H., Tang, Z.X., Gao, F., Zeng, F.M., Chai, H.Y., Luo, W.B., et al. On-orbit service (OOS) of spacecraft: A review of engineering developments. Prog. Aerosp. Sci. 108 (2019), 32–120.
Wijayatunga, M.C., Armellin, R., Holt, H., Pirovano, L., Lidtke, A.A., Design and guidance of a multi-active debris removal mission. Astrodynamics, 2023, 10.1007/s42064-023-0159-3 URL: https://link.springer.com/10.1007/s42064-023-0159-3.
May, C., Triggers and effects of an active debris removal market: Tech. Rep., 2021, The Aerospace Corporation, Center for Space Policy and Strategy.
Llorente, J.S., Agenjo, A., Carrascosa, C., de Negueruela, C., Mestreau-Garreau, A., Cropp, A., Santovincenzo, A., PROBA-3: Precise formation flying demonstration mission. Acta Astronaut. 82:1 (2013), 38–46.
OHB Sweden, PRISMA. 2023 https://www.ohb-sweden.se/space-missions/prisma. (Accessed 5 April 2023).
Redd, N.T., Bringing satellites back from the dead: Mission extension vehicles give defunct spacecraft a new lease on life - [News]. IEEE Spectr. 57:8 (2020), 6–7, 10.1109/MSPEC.2020.9150540.
R. Biesbroek, S. Aziz, A. Wolahan, S.-f. Cipolla, M. Richard-Noca, L. Piguet, The ClearSpace-1 mission: ESA and ClearSpace team up to remove debris, in: Proc. 8th Eur. Conf. Sp. Debris, 2021, pp. 1–3.
Marullo, G., Tanzi, L., Piazzolla, P., Vezzetti, E., 6D object position estimation from 2D images: a literature review. Multimedia Tools Appl., 2022, 1–39.
K. Park, T. Patten, M. Vincze, Pix2pose: Pixel-wise coordinate regression of objects for 6d pose estimation, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 7668–7677.
Szeliski, R., Computer Vision: Algorithms and Applications. 2022, Springer Nature.
Huynh, D.Q., Metrics for 3D rotations: Comparison and analysis. J. Math. Imaging Vision 35:2 (2009), 155–164.
Kelsey, J., Byrne, J., Cosgrove, M., Seereeram, S., Mehra, R., Vision-based relative pose estimation for autonomous rendezvous and docking. 2006 IEEE Aerospace Conference, 2006, 20, 10.1109/AERO.2006.1655916.
D'Amico, S., Benn, M., Jørgensen, J.L., Pose estimation of an uncooperative spacecraft from actual space imagery. Int. J. Space Sci. Eng. 2:2 (2014), 171–189.
Cassinis, L.P., Fonod, R., Gill, E., Review of the robustness and applicability of monocular pose estimation systems for relative navigation with an uncooperative spacecraft. Prog. Aerosp. Sci., 110, 2019, 100548.
Opromolla, R., Fasano, G., Rufino, G., Grassi, M., A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aerosp. Sci. 93 (2017), 53–72.
Kisantal, M., Sharma, S., Park, T.H., Izzo, D., Märtens, M., D'Amico, S., Satellite pose estimation challenge: Dataset, competition design, and results. IEEE Trans. Aerosp. Electron. Syst. 56:5 (2020), 4083–4098.
Park, T.H., Märtens, M., Jawaid, M., Wang, Z., Chen, B., Chin, T.J., Izzo, D., D'Amico, S., Satellite pose estimation competition 2021: Results and analyses. Acta Astronaut. 204 (2023), 640–665, 10.1016/j.actaastro.2023.01.002 URL: https://www.sciencedirect.com/science/article/pii/S0094576523000048.
Wang, J., Lan, C., Liu, C., Ouyang, Y., Qin, T., Lu, W., Chen, Y., Zeng, W., Yu, P., Generalizing to unseen domains: A survey on domain generalization. IEEE Trans. Knowl. Data Eng., 2022.
Song, J., Rondao, D., Aouf, N., Deep learning-based spacecraft relative navigation methods: A survey. Acta Astronaut. 191 (2022), 22–40.
Voulodimos, A., Doulamis, N., Doulamis, A., Protopapadakis, E., Deep learning for computer vision: A brief review. Comput. Intell. Neurosci., 2018, 2018.
Chai, J., Zeng, H., Li, A., Ngai, E.W., Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl., 6, 2021, 100134.
Wang, W., Yang, Y., Wang, X., Wang, W., Li, J., Development of convolutional neural network and its application in image classification: a survey. Opt. Eng., 58(4), 2019, 040901.
Minaee, S., Boykov, Y.Y., Porikli, F., Plaza, A.J., Kehtarnavaz, N., Terzopoulos, D., Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 2021.
Ciaparrone, G., Sánchez, F.L., Tabik, S., Troiano, L., Tagliaferri, R., Herrera, F., Deep learning in video multi-object tracking: A survey. Neurocomputing 381 (2020), 61–88.
Shi, J., Ulrich, S., Ruel, S., Spacecraft pose estimation using a monocular camera. 67th International Astronautical Congress, 2016, Guadalajara.
Liu, C., Hu, W., Relative pose estimation for cylinder-shaped spacecrafts using single image. IEEE Trans. Aerosp. Electron. Syst. 50:4 (2014), 3036–3056.
D. Rondao, N. Aouf, Multi-view monocular pose estimation for spacecraft relative navigation, in: 2018 AIAA Guidance, Navigation, and Control Conference, 2018, p. 2100.
V. Capuano, S.R. Alimo, A.Q. Ho, S.J. Chung, Robust features extraction for on-board monocular-based spacecraft pose acquisition, in: AIAA Scitech 2019 Forum, 2019, p. 2005.
Rathinam, A., Gaudilliere, V., Mohamed Ali, M.A., Ortiz Del Castillo, M., Pauly, L., Aouada, D., SPARK 2022 Dataset: Spacecraft Detection and Trajectory Estimation. 2022, Zenodo, 10.5281/zenodo.6599762.
Jiao, L., Zhang, F., Liu, F., Yang, S., Li, L., Feng, Z., Qu, R., A survey of deep learning-based object detection. IEEE Access 7 (2019), 128837–128868.
Ren, S., He, K., Girshick, R., Sun, J., Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst., 28, 2015.
K. He, G. Gkioxari, P. Dollár, R. Girshick, Mask r-cnn, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961–2969.
J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C., Ssd: Single shot multibox detector. European Conference on Computer Vision, 2016, Springer, 21–37.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I., Attention is all you need. Adv. Neural Inf. Process. Syst., 30, 2017.
Y. Xiong, H. Liu, S. Gupta, B. Akin, G. Bender, Y. Wang, P.J. Kindermans, M. Tan, V. Singh, B. Chen, Mobiledets: Searching for object detection architectures for mobile accelerators, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 3825–3834.
Zaidi, S.S.A., Ansari, M.S., Aslam, A., Kanwal, N., Asghar, M., Lee, B., A survey of modern deep learning based object detection models. Digit. Signal Process., 2022, 103514.
Zou, Z., Chen, K., Shi, Z., Guo, Y., Ye, J., Object detection in 20 years: A survey. Proc. IEEE, 2023.
Cosmas, K., Kenichi, A., Utilization of FPGA for onboard inference of landmark localization in CNN-based spacecraft pose estimation. Aerospace, 7(11), 2020, 159.
Huo, Y., Li, Z., Zhang, F., Fast and accurate spacecraft pose estimation from single shot space imagery using box reliability and keypoints existence judgments. IEEE Access 8 (2020), 216283–216297.
Li, K., Zhang, H., Hu, C., Learning-based pose estimation of non-cooperative spacecrafts with uncertainty prediction. Aerospace, 9(10), 2022, 10.3390/aerospace9100592 URL: https://www.mdpi.com/2226-4310/9/10/592.
B. Chen, J. Cao, A. Parra, T.J. Chin, Satellite pose estimation with deep landmark regression and nonlinear pose refinement, in: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019.
A. Price, K. Yoshida, A Monocular Pose Estimation Case Study: The Hayabusa2 Minerva-II2 Deployment, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 1992–2001.
Hartley, R., Zisserman, A., Multiple View Geometry in Computer Vision. 2003, Cambridge University Press.
Huan, W., Liu, M., Hu, Q., Pose estimation for non-cooperative spacecraft based on deep learning. 2020 39th Chinese Control Conference, CCC, 2020, IEEE, 3339–3343.
Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 43:10 (2020), 3349–3364.
T.H. Park, S. Sharma, S. D'Amico, Towards robust learning-based pose estimation of noncooperative spacecraft, in: 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, Maine, August 11–15 (2019), 2019.
J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263–7271.
M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.C. Chen, Mobilenetv2: Inverted residuals and linear bottlenecks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.
Lotti, A., Modenini, D., Tortora, P., Saponara, M., Perino, M.A., Deep learning for real-time satellite pose estimation on tensor processing units. J. Spacecr. Rockets 60:3 (2023), 1034–1038.
Tensorflow, TPU/models/official/efficientnet/lite at master · tensorflow/tpu, GitHub, URL: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite.
Tan, M., Le, Q., Efficientnet: Rethinking model scaling for convolutional neural networks. International Conference on Machine Learning, 2019, PMLR, 6105–6114.
Lotti, A., Modenini, D., Tortora, P., Investigating vision transformers for bridging domain gap in satellite pose estimation. International Conference on Applied Intelligence and Informatics, 2022, Springer, 299–314.
Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, B. Guo, Swin transformer: Hierarchical vision transformer using shifted windows, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10012–10022.
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, in: International Conference on Learning Representations, 2020.
Y. Hu, S. Speierer, W. Jakob, P. Fua, M. Salzmann, Wide-Depth-Range 6D Object Pose Estimation in Space, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15870–15879.
A. Legrand, R. Detry, C. De Vleeschouwer, End-to-end neural estimation of spacecraft pose with intermediate detection of keypoints.
Y. Hu, J. Hugonot, P. Fua, M. Salzmann, Segmentation-driven 6d object pose estimation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3385–3394.
A. Howard, M. Sandler, G. Chu, L.C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, et al., Searching for mobilenetv3, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1314–1324.
A. Rathinam, Y. Gao, On-orbit relative navigation near a known target using monocular vision and convolutional neural networks for pose estimation, in: International Symposium on Artificial Intelligence, Robotics and Automation in Space, ISAIRAS, Virtual Conference, Pasadena, CA, 2020, pp. 1–6.
Piazza, M., Maestrini, M., Di Lizia, P., et al. Deep learning-based monocular relative pose estimation of uncooperative spacecraft. 8th European Conference on Space Debris, ESA/ESOC, 2021, ESA, 1–13.
B. Cheng, B. Xiao, J. Wang, H. Shi, T.S. Huang, L. Zhang, Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 5386–5395.
Ronneberger, O., Fischer, P., Brox, T., U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015, Springer, 234–241.
Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M., Yolov4: Optimal speed and accuracy of object detection. 2020 arXiv preprint arXiv:2004.10934.
T.Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, S. Belongie, Feature pyramid networks for object detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2117–2125.
Wang, S., Wang, S., Jiao, B., Yang, D., Su, L., Zhai, P., Chen, C., Zhang, L., CA-SpaceNet: Counterfactual analysis for 6D pose estimation in space. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2022, IEEE, 10627–10634.
Pearl, J., Mackenzie, D., The Book of Why: The New Science of Cause and Effect. 2018, Basic books.
Marchand, E., Uchiyama, H., Spindler, F., Pose estimation for augmented reality: a hands-on survey. IEEE Trans. Vis. Comput. Graph. 22:12 (2015), 2633–2651.
Fischler, M.A., Bolles, R.C., Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24:6 (1981), 381–395.
Strutz, T., Data Fitting and Uncertainty: A Practical Introduction to Weighted Least Squares and Beyond. 2011, Springer.
Lepetit, V., Moreno-Noguer, F., Fua, P., EPnP: An accurate O(n) solution to the PnP problem. Int. J. Comput. Vis., 81(2), 2009, 155.
Y. Hu, P. Fua, W. Wang, M. Salzmann, Single-stage 6d object pose estimation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 2930–2939.
A. Kendall, R. Cipolla, Geometric loss functions for camera pose regression with deep learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5974–5983.
Phisannupawong, T., Kamsing, P., Torteeka, P., Channumsin, S., Sawangwit, U., Hematulin, W., Jarawan, T., Somjit, T., Yooyen, S., Delahaye, D., et al. Vision-based spacecraft pose estimation via a deep convolutional neural network for noncooperative docking operations. Aerospace, 7(9), 2020, 126.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
Proença, P.F., Gao, Y., Deep learning for spacecraft pose estimation from photorealistic rendering. 2020 IEEE International Conference on Robotics and Automation, ICRA, 2020, IEEE, 6007–6013.
Krizhevsky, A., Sutskever, I., Hinton, G.E., ImageNet classification with deep convolutional neural networks. Commun. ACM 60 (2012), 84–90.
Wang, Q., Ma, Y., Zhao, K., Tian, Y., A comprehensive survey of loss functions in machine learning. Ann. Data Sci. 9:2 (2022), 187–212.
S. Sharma, S. D'Amico, Pose estimation for non-cooperative rendezvous using neural networks, in: AIAA/AAS Space Flight Mechanics Meeting, January 2019, 2019.
Huang, H., Zhao, G., Gu, D., Bo, Y., Non-model-based monocular pose estimation network for uncooperative spacecraft using convolutional neural network. IEEE Sens. J. 21:21 (2021), 24579–24590.
K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
J. Hu, L. Shen, G. Sun, Squeeze-and-excitation networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132–7141.
Posso, J., Bois, G., Savaria, Y., Mobile-URSONet: an embeddable neural network for onboard spacecraft pose estimation. 2022 IEEE International Symposium on Circuits and Systems, ISCAS, 2022, IEEE, 794–798.
Park, T.H., D'Amico, S., Robust multi-task learning and online refinement for spacecraft pose estimation across domain gap. Adv. Space Res., 2023.
Bukschat, Y., Vetter, M., EfficientPose: An efficient, accurate and scalable end-to-end 6D multi object pose estimation approach. 2020 arXiv preprint arXiv:2011.04307.
M. Tan, R. Pang, Q.V. Le, Efficientdet: Scalable and efficient object detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10781–10790.
Shannon, C.E., A mathematical theory of communication. Bell Syst. Tech. J. 27:3 (1948), 379–423.
A. Garcia, M.A. Musallam, V. Gaudilliere, E. Ghorbel, K. Al Ismaeil, M. Perez, D. Aouada, Lspnet: A 2d localization-oriented spacecraft pose estimation neural network, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2048–2056.
M.A. Musallam, V. Gaudillière, M.O. del Castillo, K. Al Ismaeil, D. Aouada, Leveraging Equivariant Features for Absolute Pose Regression, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2022, pp. 6876–6886.
A. Kendall, M. Grimes, R. Cipolla, PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization, in: Proceedings of the IEEE International Conference on Computer Vision, ICCV, 2015.
Weiler, M., Cesa, G., General e(2)-equivariant steerable CNNs. Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., Garnett, R., (eds.) Advances in Neural Information Processing Systems, Vol. 32, 2019 URL: https://proceedings.neurips.cc/paper_files/paper/2019/file/45d6637b718d0f24a237069fe41b0db4-Paper.pdf.
Sun, K., Zhao, Y., Jiang, B., Cheng, T., Xiao, B., Liu, D., Mu, Y., Wang, X., Liu, W., Wang, J., High-resolution representations for labeling pixels and regions. 2019 arXiv preprint arXiv:1904.04514.
K. Sun, B. Xiao, D. Liu, J. Wang, Deep high-resolution representation learning for human pose estimation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 5693–5703.
Redmon, J., Farhadi, A., Yolov3: An incremental improvement. 2018 arXiv preprint arXiv:1804.02767.
Long, X., Deng, K., Wang, G., Zhang, Y., Dang, Q., Gao, Y., Shen, H., Ren, J., Han, S., Ding, E., et al. PP-YOLO: An effective and efficient implementation of object detector. 2020 arXiv preprint arXiv:2007.12099.
Moré, J.J., The Levenberg–Marquardt algorithm: implementation and theory. Numerical Analysis, 1978, Springer, 105–116.
Z. Cai, N. Vasconcelos, Cascade r-cnn: Delving into high quality object detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6154–6162.
K. Black, S. Shankar, D. Fonseka, J. Deutsch, A. Dhir, M.R. Akella, Real-time, flight-ready, non-cooperative spacecraft pose estimation using monocular imagery, in: 31st AAS/AIAA Space Flight Mechanics Meeting, February 2021, 2021.
Hou, T., Ahmadyan, A., Zhang, L., Wei, J., Grundmann, M., Mobilepose: Real-time pose estimation for unseen objects with weak shape supervision. 2020 arXiv preprint arXiv:2003.03522.
Ge, Z., Liu, S., Wang, F., Li, Z., Sun, J., Yolox: Exceeding yolo series in 2021. 2021 arXiv preprint arXiv:2107.08430.
S. Hinterstoißer, V. Lepetit, S. Ilic, S. Holzer, G.R. Bradski, K. Konolige, N. Navab, Model Based Training, Detection and Pose Estimation of Texture-Less 3D Objects in Heavily Cluttered Scenes, in: Asian Conference on Computer Vision, 2012.
Agarwal, S., Mierle, K., The Ceres Solver Team, Ceres solver. 2022 URL: https://github.com/ceres-solver/ceres-solver.
Wikipedia contributors, Phi-Sat-1 — Wikipedia, The free encyclopedia. 2023 URL: https://en.wikipedia.org/w/index.php?title=Phi-Sat-1&oldid=1147216017. (Online accessed 10 July 2023).
Intel, Intel powers first satellite with AI on board. 2023 URL: https://www.intel.com/content/www/us/en/newsroom/news/first-satellite-ai.html. (Online accessed 10 July 2023).
eeNews Europe (electronics europe News), Space-rated Jetson AI supercomputer in re-entry demonstration. 2023 URL: https://www.eenewseurope.com/en/space-rated-jetson-ai-supercomputer-in-re-entry-demonstration/. (Online accessed 10 July 2023).
Sehgal, A., Kehtarnavaz, N., Guidelines and benchmarks for deployment of deep learning models on smartphones as real-time apps. Mach. Learn. Knowl. Extr. 1:1 (2019), 450–465.
V. Kothari, E. Liberis, N.D. Lane, The final frontier: Deep learning in space, in: Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications, 2020, pp. 45–49.
Chen, J., Ran, X., Deep learning with edge computing: A review. Proc. IEEE 107:8 (2019), 1655–1674.
Lentaris, G., Maragos, K., Stratakos, I., Papadopoulos, L., Papanikolaou, O., Soudris, D., Lourakis, M., Zabulis, X., Gonzalez-Arjona, D., Furano, G., High-performance embedded computing in space: Evaluation of platforms for vision-based navigation. J. Aerosp. Inf. Syst. 15:4 (2018), 178–192.
Ziaja, M., Bosowski, P., Myller, M., Gajoch, G., Gumiela, M., Protich, J., Borda, K., Jayaraman, D., Dividino, R., Nalepa, J., Benchmarking deep learning for on-board space applications. Remote Sens., 13(19), 2021, 3981.
Baller, S.P., Jindal, A., Chadha, M., Gerndt, M., DeepEdgeBench: Benchmarking deep neural networks on edge devices. 2021 IEEE International Conference on Cloud Engineering, IC2E, 2021, IEEE, 20–30.
Hadidi, R., Cao, J., Xie, Y., Asgari, B., Krishna, T., Kim, H., Characterizing the deployment of deep neural networks on commercial edge devices. 2019 IEEE International Symposium on Workload Characterization, IISWC, 2019, IEEE, 35–48.
Xilinx, Product guide: DPUCZDX8G for Zynq UltraScale+ MPSoCs. 2022 URL: https://www.xilinx.com/content/dam/xilinx/support/documents/ip_documentation/dpu/v4_0/pg338-dpu.pdf. (Online accessed 30 January 2023).
Furano, G., Meoni, G., Dunne, A., Moloney, D., Ferlet-Cavrois, V., Tavoularis, A., Byrne, J., Buckley, L., Psarakis, M., Voss, K.O., et al. Towards the use of artificial intelligence on the edge in space systems: Challenges and opportunities. IEEE Aerosp. Electron. Syst. Mag. 35:12 (2020), 44–56.
Leon, V., Lentaris, G., Soudris, D., Vellas, S., Bernou, M., Towards employing FPGA and ASIP acceleration to enable onboard AI/ML in space applications. 2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration, VLSI-SoC, 2022, IEEE, 1–4.
Azodi, C.B., Tang, J., Shiu, S.H., Opening the black box: interpretable machine learning for geneticists. Trends Genet. 36:6 (2020), 442–455.
O. Li, H. Liu, C. Chen, C. Rudin, Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32, No. 1, 2018.
Wang, H., Yeung, D.Y., A survey on Bayesian deep learning. ACM Comput. Surv. (CSUR) 53:5 (2020), 1–37.
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L., Imagenet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, IEEE, 248–255.
T.H. Park, S. D'Amico, Adaptive Neural Network-based Unscented Kalman Filter for Spacecraft Pose Tracking at Rendezvous, in: AAS/AIAA Astrodynamics Specialist Conference, 2022.
Lin, T., Maire, M., Belongie, S.J., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L., Microsoft COCO: Common objects in context. Fleet, D.J., Pajdla, T., Schiele, B., Tuytelaars, T., (eds.) Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V Lecture Notes in Computer Science, vol. 8693, 2014, Springer, 740–755, 10.1007/978-3-319-10602-1_48.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M.S., Berg, A.C., Fei-Fei, L., ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115:3 (2015), 211–252, 10.1007/s11263-015-0816-y.
Song, Y., Wang, T., Cai, P., Mondal, S.K., Sahoo, J.P., A comprehensive survey of few-shot learning: Evolution, applications, challenges, and opportunities. ACM Comput. Surv., 2022.
Cao, W., Zhou, C., Wu, Y., Ming, Z., Xu, Z., Zhang, J., Research progress of zero-shot learning beyond computer vision. Algorithms and Architectures for Parallel Processing: 20th International Conference, ICA3PP 2020, New York City, NY, USA, October 2–4, 2020, Proceedings, Part II 20, 2020, Springer, 538–551.
Rennie, C., Shome, R., Bekris, K.E., Souza, A.F.D., A dataset for improved RGBD-based object detection and pose estimation for warehouse pick-and-place. IEEE Robotics Autom. Lett. 1:2 (2016), 1179–1185, 10.1109/LRA.2016.2532924.
Xiang, Y., Schmidt, T., Narayanan, V., Fox, D., PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes. Kress-Gazit, H., Srinivasa, S.S., Howard, T., Atanasov, N., (eds.) Robotics: Science and Systems XIV, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, June 26–30, 2018, 2018, 10.15607/RSS.2018.XIV.019 URL: http://www.roboticsproceedings.org/rss14/p19.html.
Pauly, L., Jamrozik, M.L., Del Castillo, M.O., Borgue, O., Singh, I.P., Makhdoomi, M.R., Christidi-Loumpasefski, O.O., Gaudilliere, V., Martinez, C., Rathinam, A., et al. Lessons from a space lab–An image acquisition perspective. 2022 arXiv preprint arXiv:2208.08865.
T.H. Park, J. Bosse, S. D'Amico, Robotic testbed for rendezvous and optical navigation: Multi-source calibration and machine learning use cases, in: 2021 AAS/AIAA Astrodynamics Specialist Conference, Big Sky, Virtual, August 9–11 (2021), 2021.
Sabatini, M., Palmerini, G.B., Gasbarri, P., A testbed for visual based navigation and control during space rendezvous operations. Acta Astronaut. 117 (2015), 184–196, 10.1016/j.actaastro.2015.07.026 URL: https://www.sciencedirect.com/science/article/pii/S0094576515003070.
Fang, Y., Yap, P.T., Lin, W., Zhu, H., Liu, M., Source-free unsupervised domain adaptation: A survey. 2022 arXiv preprint arXiv:2301.00265.
Wang, M., Deng, W., Deep visual domain adaptation: A survey. Neurocomputing 312 (2018), 135–153.
European Space Agency (ESA), Prisma's Tango and Mango satellites. 2010 https://www.esa.int/ESA_Multimedia/Images/2010/10/Prisma_s_Tango_and_Mango_satellites. (Accessed 5 April 2023).
V. Gaudillière, L. Pauly, A. Rathinam, A. Garcia Sanchez, M.A. Musallam, D. Aouada, 3D-Aware Object Localization using Gaussian Implicit Occupancy Function, in: IROS 2023 – 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, Detroit, United States, 2023.
Mertan, A., Duff, D.J., Unal, G., Single image depth estimation: An overview. Digit. Signal Process., 2022, 103441.
Y. Wang, X. Shen, S.X. Hu, Y. Yuan, J.L. Crowley, D. Vaufreydaz, Self-supervised transformers for unsupervised object discovery using normalized cut, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14543–14553.
Martin, I., Dunstan, M., Gestido, M.S., Planetary surface image generation for testing future space missions with pangu. 2nd RPI Space Imaging Workshop, 2019, Sensing, Estimation, and Automation Laboratory.
R. Brochard, J. Lebreton, C. Robin, K. Kanani, G. Jonniaux, A. Masson, N. Despré, A. Berjaoui, Scientific image rendering for space scenes with the SurRender software, in: 69th International Astronautical Congress, IAC, Bremen, Germany, 1–5 October 2018, 2018.
Shreiner, D., The Khronos OpenGL ARB Working Group, OpenGL Programming Guide: The Official Guide to Learning OpenGL, Versions 3.0 and 3.1. 2009, Pearson Education.
Rathinam, A., Hao, Z., Gao, Y., Autonomous visual navigation for spacecraft on-orbit operations. Space Robotics and Autonomous Systems: Technologies, Advances and Applications, 2021, Institution of Engineering and Technology, 125–157, 10.1049/PBCE131E_ch5.
M. Bechini, P. Lunghi, M. Lavagna, et al., Spacecraft pose estimation via monocular image processing: Dataset generation and validation, in: 9th European Conference for Aerospace Sciences, EUCASS 2022, 2022, pp. 1–15.
Beierle, C., D'Amico, S., Variable-magnification optical stimulator for training and validation of spaceborne vision-based navigation. J. Spacecr. Rockets 56:4 (2019), 1060–1072.
Colmenarejo, P., Graziano, M., Novelli, G., Mora, D., Serra, P., Tomassini, A., Seweryn, K., Prisco, G., Fernandez, J.G., On ground validation of debris removal technologies. Acta Astronaut. 158 (2019), 206–219, 10.1016/j.actaastro.2018.01.026 URL: https://www.sciencedirect.com/science/article/pii/S0094576517312845.
Benninghoff, H., Rems, F., Risse, E.A., Mietner, C., European proximity operations simulator 2.0 (EPOS) - A robotic-based rendezvous and docking simulator. J. Large-Scale Res. Facil. JLSRF, 3, 2017, 107.
L.P. Cassinis, A. Menicucci, E. Gill, I. Ahrns, J.G. Fernandez, On-ground validation of a CNN-based monocular pose estimation system for uncooperative spacecraft, in: 8th European Conference on Space Debris, Vol. 8, 2021.
P. Lunghi, M. Ciarambino, L. Losi, M. Lavagna, A new experimental facility for testing of vision-based gnc algorithms for planetary landing, in: 10th International ESA Conference on Guidance, Navigation & Control Systems, GNC 2017, 2017.
Lunghi, P., Losi, L., Pesce, V., Lavagna, M., et al. Ground testing of vision-based GNC systems by means of a new experimental facility. International Astronautical Congress: IAC Proceedings, 2018, International Astronautical Federation, IAF, 1–15.
M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A.Y. Ng, et al., ROS: an open-source Robot Operating System, in: ICRA Workshop on Open Source Software, Vol. 3, No. 3.2, Kobe, Japan, 2009, p. 5.
Ben-David, S., Blitzer, J., Crammer, K., Pereira, F., Analysis of representations for domain adaptation. Adv. Neural Inf. Process. Syst., 19, 2006.
Toft, C., Maddern, W., Torii, A., Hammarstrand, L., Stenborg, E., Safari, D., Okutomi, M., Pollefeys, M., Sivic, J., Pajdla, T., et al. Long-term visual localization revisited. IEEE Trans. Pattern Anal. Mach. Intell., 2020.
Mumuni, A., Mumuni, F., Data augmentation: A comprehensive survey of modern approaches. Array, 16, 2022, 100258, 10.1016/j.array.2022.100258 URL: https://www.sciencedirect.com/science/article/pii/S2590005622000911.
X. Peng, Z. Tang, F. Yang, R.S. Feris, D. Metaxas, Jointly optimize data augmentation and network training: Adversarial data augmentation in human pose estimation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2226–2234.
J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, P. Abbeel, Domain randomization for transferring deep neural networks from simulation to the real world, in: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2017, pp. 23–30.
P.T. Jackson, A.A. Abarghouei, S. Bonner, T.P. Breckon, B. Obara, Style augmentation: data augmentation via style randomization, in: CVPR Workshops, Vol. 6, 2019, pp. 10–11.
Ruder, S., An overview of multi-task learning in deep neural networks. 2017 arXiv preprint arXiv:1706.05098.
C. Shui, M. Abbasi, L.É. Robitaille, B. Wang, C. Gagné, A principled approach for learning task similarity in multitask learning, in: Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019, pp. 3446–3452.
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V., Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17:1 (2016), 2030–2096.
C. Zhang, M. Zhang, S. Zhang, D. Jin, Q. Zhou, Z. Cai, H. Zhao, X. Liu, Z. Liu, Delving deep into the generalization of vision transformers under distribution shifts, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7277–7286.
Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A., Brendel, W., ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. International Conference on Learning Representations, 2019 URL: https://openreview.net/forum?id=Bygh9j09KX.
Kosmidis, L., Rodriguez, I., Jover, Á., Alcaide, S., Lachaize, J., Abella, J., Notebaert, O., Cazorla, F.J., Steenari, D., GPU4S: Embedded GPUs in space-latest project updates. Microprocess. Microsyst., 77, 2020, 103143.
W. Powell, M. Campola, T. Sheets, A. Davidson, S. Welsh, Commercial Off-The-Shelf GPU Qualification for Space Applications, Technical Report, 2018.
Bruhn, F.C., Tsog, N., Kunkel, F., Flordal, O., Troxel, I., Enabling radiation tolerant heterogeneous GPU-based onboard data processing in space. CEAS Space J. 12:4 (2020), 551–564.
AMD Xilinx, Vitis AI user guide. 2022 https://www.xilinx.com/content/dam/xilinx/support/documents/sw_manuals/vitis_ai/2_5/ug1414-vitis-ai.pdf. (Online accessed 30 January 2023).
Wistuba, M., Rawat, A., Pedapati, T., A survey on neural architecture search. 2019 arXiv preprint arXiv:1905.01392.
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., Yang, G.Z., XAI—Explainable artificial intelligence. Sci. Robot., 4(37), 2019, eaay7120.
Bai, X., Wang, X., Liu, X., Liu, Q., Song, J., Sebe, N., Kim, B., Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments. Pattern Recognit., 120, 2021, 108102.
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J., Explainable AI: A brief survey on history, research areas, approaches and challenges. Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II 8, 2019, Springer, 563–574.
Kendall, A., Gal, Y., What uncertainties do we need in Bayesian deep learning for computer vision?. Adv. Neural Inf. Process. Syst., 30, 2017.
Shafer, G., Vovk, V., A tutorial on conformal prediction. J. Mach. Learn. Res., 9(3), 2008.
Angelopoulos, A.N., Bates, S., et al. Conformal prediction: A gentle introduction. Found. Trends Mach. Learn. 16:4 (2023), 494–591.
Tibshirani, R.J., Foygel Barber, R., Candes, E., Ramdas, A., Conformal prediction under covariate shift. Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., Garnett, R., (eds.) Advances in Neural Information Processing Systems, Vol. 32, 2019, Curran Associates, Inc. URL: https://proceedings.neurips.cc/paper/2019/file/8fb21ee7a2207526da55a679f0332de2-Paper.pdf.
Jawaid, M., Elms, E., Latif, Y., Chin, T.J., Towards bridging the space domain gap for satellite pose estimation using event sensing. 2023 IEEE International Conference on Robotics and Automation, ICRA, 2023, IEEE, 11866–11873.
M. Hogan, D. Rondao, N. Aouf, O. Dubois-Matra, Using Convolutional Neural Networks for Relative Pose Estimation of a Non-Cooperative Spacecraft with Thermal Infrared Imagery, in: European Space Agency Guidance, Navigation and Control Conference 2021, 2021.
Rondao, D., Aouf, N., Richardson, M.A., ChiNet: Deep recurrent convolutional learning for multimodal spacecraft pose estimation. IEEE Trans. Aerosp. Electron. Syst., 2022.
A. Lengyel, S. Garg, M. Milford, J.C. van Gemert, Zero-shot day-night domain adaptation with a physics prior, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 4399–4409.
Gou, M., Pan, H., Fang, H.S., Liu, Z., Lu, C., Tan, P., Unseen object 6D pose estimation: a benchmark and baselines. 2022 arXiv preprint arXiv:2206.11808.
K. Park, A. Mousavian, Y. Xiang, D. Fox, Latentfusion: End-to-end differentiable reconstruction and rendering for unseen object pose estimation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10710–10719.
M.A. Musallam, M.O. Del Castillo, K. Al Ismaeil, M.D. Perez, D. Aouada, Leveraging temporal information for 3D trajectory estimation of space objects, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3816–3822.
Musallam, M.A., Rathinam, A., Gaudillière, V., Castillo, M.O.d., Aouada, D., CubeSat-CDT: A cross-domain dataset for 6-DoF trajectory estimation of a symmetric spacecraft. European Conference on Computer Vision, 2022, Springer, 112–126.
A. Beedu, H. Alamri, I. Essa, Video based Object 6D Pose Estimation using Transformers, in: NeurIPS 2022 Workshop on Vision Transformers: Theory and Applications 2022, 2022.
R. Clark, S. Wang, A. Markham, N. Trigoni, H. Wen, Vidloc: A deep spatio-temporal model for 6-dof video-clip relocalization, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6856–6864.
A. Rathinam, V. Gaudillière, L. Pauly, D. Aouada, Pose estimation of a known texture-less space target using convolutional neural networks, in: 73rd International Astronautical Congress, Paris 18–22 September 2022, 2022.
Musallam, M.A., Gaudillière, V., Ghorbel, E., Al Ismaeil, K., Perez, M.D., Poucet, M., Aouada, D., Spacecraft recognition leveraging knowledge of space environment: simulator, dataset, competition design and analysis. 2021 IEEE International Conference on Image Processing Challenges, ICIPC, 2021, IEEE, 11–15.
Musallam, M.A., Al Ismaeil, K., Oyedotun, O., Perez, M.D., Poucet, M., Aouada, D., SPARK: SPAcecraft recognition leveraging knowledge of space environment. 2021 arXiv preprint, arXiv:2104.