References of "Rathinam, Arunkumar 50046173"
Pose Estimation of a Known Texture-Less Space Target using Convolutional Neural Networks
Rathinam, Arunkumar; Gaudilliere, Vincent; Pauly, Leo et al.

in 73rd International Astronautical Congress, Paris 18-22 September 2022 (2022, September)

Orbital debris removal and On-orbit Servicing, Assembly and Manufacturing [OSAM] are the main areas for future robotic space missions. To achieve intelligence and autonomy in these missions and to carry out robot operations, autonomous guidance and navigation, especially vision-based navigation, is essential. With recent advances in machine learning, state-of-the-art Deep Learning [DL] approaches for object detection and camera pose estimation have advanced to be on par with classical approaches and can be used for target pose estimation in relative navigation scenarios. State-of-the-art DL-based spacecraft pose estimation approaches are suitable for any known target with significant surface texture. However, they are less applicable when the target is a texture-less and symmetric object such as a rocket nozzle. This paper investigates a novel ellipsoid-based approach combined with convolutional neural networks for texture-less space object pose estimation. It also presents a dataset for a new texture-less space target, an apogee kick motor, which is used for the study; the dataset includes synthetic images generated with a simulator developed for rendering synthetic space imagery.
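
The ellipsoid-based idea above pairs a CNN with the known shape of the target. A minimal PyTorch sketch of what such a prediction stage could look like is given below, assuming a network that regresses the five parameters (centre, semi-axes, orientation) of the target's projected ellipse from a single image; the class name, ResNet-18 backbone, and 5-parameter output are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch: a CNN that regresses the projected-ellipse parameters
# of the target's enclosing ellipsoid. Names and the 5-parameter output are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision import models

class EllipseRegressor(nn.Module):
    """Predicts (cx, cy, a, b, theta) of the target's projected ellipse."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # texture-less target: shape cues only
        backbone.fc = nn.Linear(backbone.fc.in_features, 5)
        self.net = backbone

    def forward(self, image):        # image: (B, 3, H, W)
        return self.net(image)       # (B, 5) ellipse parameters

# A downstream step (not shown) would recover the relative pose by matching
# the predicted 2D ellipse to the target's known 3D ellipsoid.
model = EllipseRegressor()
dummy = torch.randn(1, 3, 224, 224)
print(model(dummy).shape)            # torch.Size([1, 5])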

CubeSat-CDT: A Cross-Domain Dataset for 6-DoF Trajectory Estimation of a Symmetric Spacecraft
Mohamed Ali, Mohamed Adel; Rathinam, Arunkumar; Gaudilliere, Vincent et al.

in Proceedings of the 17th European Conference on Computer Vision Workshops (ECCVW 2022) (2022)

This paper introduces a new cross-domain dataset, CubeSat-CDT, that includes 21 trajectories of a real CubeSat acquired in a laboratory setup, combined with 65 trajectories generated using two rendering engines, i.e. Unity and Blender. The three data sources incorporate the same 1U CubeSat and share the same camera intrinsic parameters. In addition, we conduct experiments to show the characteristics of the dataset using a novel and efficient spacecraft trajectory estimation method that leverages the information provided by the three data domains. Given a video input of a target spacecraft, the proposed end-to-end approach relies on a Temporal Convolutional Network that enforces the inter-frame coherence of the estimated 6-Degree-of-Freedom spacecraft poses. The pipeline is decomposed into two stages: first, spatial features are extracted from each frame in parallel; second, these features are lifted to the space of camera poses while preserving temporal information. Our results highlight the importance of addressing the domain gap problem to propose reliable solutions for close-range autonomous relative navigation between spacecraft. Since the nature of the data used during training directly impacts the performance of the final solution, the CubeSat-CDT dataset is provided to advance research in this direction.
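
The two-stage pipeline described above (per-frame spatial features, then a temporal network that outputs coherent 6-DoF poses) can be sketched in PyTorch as follows. This is a minimal illustration under assumed layer sizes and a 6-value pose encoding; it is not the authors' CubeSat-CDT code, and a full Temporal Convolutional Network would typically use dilated convolutions rather than the plain 1D convolutions shown here.

# Hypothetical sketch of the two-stage idea: per-frame spatial features,
# then temporal convolutions that output one 6-DoF pose per frame.
# Layer sizes and the 6-value pose encoding are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class TemporalPoseNet(nn.Module):
    def __init__(self, feat_dim=512, pose_dim=6):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                     # stage 1: spatial features per frame
        self.encoder = cnn
        self.tcn = nn.Sequential(                  # stage 2: mix information across frames
            nn.Conv1d(feat_dim, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, pose_dim, kernel_size=3, padding=1),
        )

    def forward(self, video):                      # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.encoder(video.flatten(0, 1))  # (B*T, feat_dim)
        feats = feats.view(b, t, -1).transpose(1, 2)  # (B, feat_dim, T)
        return self.tcn(feats).transpose(1, 2)     # (B, T, 6): one pose per frame

poses = TemporalPoseNet()(torch.randn(2, 8, 3, 224, 224))
print(poses.shape)                                 # torch.Size([2, 8, 6])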
