Abstract:
Orbital debris removal and On-Orbit Servicing, Assembly and Manufacturing (OSAM) are the main areas for future robotic space missions. To achieve intelligence and autonomy in these missions and to carry out robotic operations, autonomous guidance and navigation, especially vision-based navigation, is essential. With recent advances in machine learning, state-of-the-art Deep Learning (DL) approaches for object detection and camera pose estimation are now on par with classical approaches and can be used for target pose estimation in relative navigation scenarios. State-of-the-art DL-based spacecraft pose estimation approaches are suitable for any known target with significant surface texture. However, they are less applicable when the target is a texture-less and symmetric object, such as a rocket nozzle. This paper investigates a novel ellipsoid-based approach combined with convolutional neural networks for texture-less space object pose estimation. It also presents the dataset for a new texture-less space target, an apogee kick motor, which is used for the study. The dataset includes synthetic images generated with a simulator developed for rendering synthetic space imagery.