Mohamed Ali, Mohamed Adel et al.
in Proceedings of the 17th European Conference on Computer Vision Workshops (ECCVW 2022) (2022)
This paper introduces a new cross-domain dataset, CubeSat-CDT, that includes 21 trajectories of a real CubeSat acquired in a laboratory setup, combined with 65 trajectories generated using two rendering engines – i.e. Unity and Blender. The three data sources incorporate the same 1U CubeSat and share the same camera intrinsic parameters. In addition, we conduct experiments to show the characteristics of the dataset using a novel and efficient spacecraft trajectory estimation method that leverages the information provided by the three data domains. Given a video input of a target spacecraft, the proposed end-to-end approach relies on a Temporal Convolutional Network that enforces the inter-frame coherence of the estimated 6-Degree-of-Freedom spacecraft poses. The pipeline is decomposed into two stages: first, spatial features are extracted from each frame in parallel; second, these features are lifted to the space of camera poses while preserving temporal information. Our results highlight the importance of addressing the domain gap problem to propose reliable solutions for close-range autonomous relative navigation between spacecraft. Since the nature of the data used during training directly impacts the performance of the final solution, the CubeSat-CDT dataset is provided to advance research in this direction.

Mohamed Ali, Mohamed Adel et al.
in IEEE Conference on Computer Vision and Pattern Recognition (2022)
Pose estimation enables vision-based systems to refer to their environment, supporting activities ranging from scene navigation to object manipulation. However, end-to-end approaches, which have achieved state-of-the-art performance in many perception tasks, are still unable to compete with 3D geometry-based methods in pose estimation. Indeed, absolute pose regression has been proven to be more related to image retrieval than to 3D structure. Our assumption is that statistical features learned by classical convolutional neural networks do not carry enough geometrical information to reliably solve this task. This paper studies the use of deep equivariant features for end-to-end pose regression. We further propose a translation- and rotation-equivariant Convolutional Neural Network whose architecture directly induces representations of camera motions into the feature space. In the context of absolute pose regression, this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations. Therefore, directly learning equivariant features efficiently compensates for learning intermediate representations that are indirectly equivariant yet data-intensive. Extensive experimental validation demonstrates that our lightweight model outperforms existing ones on standard datasets.
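
The ECCVW 2022 entry above describes a two-stage pipeline: spatial features are extracted from each frame in parallel, then a Temporal Convolutional Network lifts the feature sequence to 6-DoF poses while enforcing inter-frame coherence. Below is a minimal sketch of that idea, not the authors' implementation; the backbone, feature size, layer counts, and pose parameterization are assumptions for illustration.

```python
# Hypothetical sketch of a two-stage video-to-pose pipeline:
# (1) a shared CNN extracts per-frame spatial features,
# (2) a temporal convolutional stack maps the feature sequence to
#     per-frame 6-DoF poses (translation + unit quaternion).
import torch
import torch.nn as nn

class TrajectoryRegressor(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stage 1: lightweight per-frame feature extractor (assumed backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Stage 2: 1D convolutions over time enforce inter-frame coherence.
        self.temporal = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pose_head = nn.Linear(feat_dim, 7)  # 3 translation + 4 quaternion

    def forward(self, video):                    # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1))       # (B*T, feat_dim)
        feats = feats.view(b, t, -1).transpose(1, 2)     # (B, feat_dim, T)
        feats = self.temporal(feats).transpose(1, 2)     # (B, T, feat_dim)
        poses = self.pose_head(feats)                    # (B, T, 7)
        trans, quat = poses[..., :3], poses[..., 3:]
        quat = quat / quat.norm(dim=-1, keepdim=True)    # normalize rotations
        return trans, quat

trans, quat = TrajectoryRegressor()(torch.randn(2, 8, 3, 64, 64))
```
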
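
The CVPR 2022 entry above builds translation- and rotation-equivariant features so that camera motions act directly on the feature space. As a simplified illustration of the underlying idea, and not the paper's architecture, the sketch below lifts a standard convolution to the discrete C4 group of 90-degree rotations: rotating the input rotates the feature maps and cyclically shifts the orientation channel instead of producing unrelated features.

```python
# Hypothetical sketch of a group-lifting convolution over the C4 group
# (rotations by multiples of 90 degrees). Convolving with all four rotated
# copies of one filter yields features that transform predictably when the
# input image is rotated.
import torch
import torch.nn.functional as F

class C4LiftingConv(torch.nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
        self.pad = k // 2

    def forward(self, x):                         # x: (B, C_in, H, W)
        maps = [F.conv2d(x, torch.rot90(self.weight, r, dims=(2, 3)),
                         padding=self.pad) for r in range(4)]
        return torch.stack(maps, dim=2)           # (B, C_out, 4, H, W)

lift = C4LiftingConv(3, 8)
x = torch.randn(1, 3, 32, 32)
out = lift(x)
out_rot = lift(torch.rot90(x, 1, dims=(2, 3)))
# Rotating the input rotates each feature map and shifts the orientation axis.
expected = torch.rot90(out, 1, dims=(3, 4)).roll(1, dims=2)
print(torch.allclose(out_rot, expected, atol=1e-4))   # True
```

Continuous in-plane rotations, as targeted by the paper, are typically handled with steerable filters rather than this discrete construction.
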
Jamrozik, Michele Lynn et al.
in Jamrozik, Michele Lynn; Gaudilliere, Vincent; Musallam, Mohamed Adel (Eds.) et al., Proceedings of the 73rd International Astronautical Congress (2022)
Images generated in space with monocular camera payloads suffer degradations that hinder their utility in precision tracking applications, including debris identification, removal, and in-orbit servicing. To address the substandard quality of images captured in space and make them more reliable for space object tracking applications, several Image Enhancement (IE) techniques are investigated in this work. In addition, two novel space IE methods were developed. The first method, called REVEAL, relies upon more traditional image processing enhancement techniques and assumes a Retinex image formation model. A second method, based on a UNet Deep Learning (DL) model, was also developed. The image degradations addressed include blurring, exposure issues, poor contrast, and noise. The shortage of space-generated data suitable for supervised DL is also addressed. A visual comparison of the two techniques was conducted against the current state of the art in DL-based IE methods relevant to images captured in space. This work determines that both the REVEAL and UNet-based DL solutions are well suited to correcting the degradations most often found in space images. In addition, enhancing images in a pre-processing stage facilitates the subsequent extraction of object contours and metrics. By extracting information through image metrics, object properties such as size and orientation that enable more precise space object tracking can be determined more easily. Keywords: Deep Learning, Space, Image Enhancement, Space Debris

Mohamed Ali, Mohamed Adel et al.
in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops (2021, October)

Ortiz Del Castillo, Miguel et al.
Presentation (2021, September)

Garcia Sanchez, Albert et al.
in Proceedings of Conference on Computer Vision and Pattern Recognition Workshops (2021, June)
Being capable of estimating the pose of uncooperative objects in space has been proposed as a key asset for enabling safe close-proximity operations such as space rendezvous, in-orbit servicing and active debris removal. Usual approaches for pose estimation involve classical computer vision-based solutions or the application of Deep Learning (DL) techniques. This work explores a novel DL-based methodology, using Convolutional Neural Networks (CNNs), for estimating the pose of uncooperative spacecraft. Contrary to other approaches, the proposed CNN directly regresses poses without needing any prior 3D information. Moreover, bounding boxes of the spacecraft in the image are predicted in a simple, yet efficient manner.
The performed experiments show that this work competes with the state of the art in uncooperative spacecraft pose estimation, including works that require 3D information as well as works that predict bounding boxes through sophisticated CNNs.

Mohamed Ali, Mohamed Adel et al.
in European Conference on Space Debris (2021), 8(1)

Mohamed Ali, Mohamed Adel et al.
in 2021 IEEE International Conference on Image Processing (ICIP) (2021)
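
The IAC 2022 entry above pairs a Retinex-based method (REVEAL) with a UNet Deep Learning model for enhancing degraded space imagery. Below is a minimal sketch of a UNet-style encoder-decoder enhancer; it is an illustration only, and the depth, channel widths, and residual output are assumptions rather than the authors' design.

```python
# Hypothetical sketch of a small UNet-style image-enhancement network:
# an encoder-decoder with a skip connection that predicts a corrected
# image from a degraded input (blur, noise, poor exposure/contrast).
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = block(3, 32)
        self.enc2 = block(32, 64)
        self.down = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = block(64 + 32, 32)           # skip connection from enc1
        self.out = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)                        # full resolution
        e2 = self.enc2(self.down(e1))            # half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.clamp(x + self.out(d1), 0.0, 1.0)   # residual correction

enhanced = TinyUNet()(torch.rand(1, 3, 128, 128))
```
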
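
The CVPR Workshops 2021 entry above regresses poses directly from images, without a prior 3D model, and predicts bounding boxes through a simple additional head. The sketch below illustrates such a dual-head design on top of assumed backbone features; the feature size, box parameterization, and loss weighting are illustrative assumptions, not the published method.

```python
# Hypothetical sketch of two regression heads on shared image features:
# one head outputs a 6-DoF pose (translation + unit quaternion), the other
# a normalized bounding box, with no 3D model required.
import torch
import torch.nn as nn

feat_dim = 128                                   # assumed backbone output size
pose_head = nn.Linear(feat_dim, 7)               # 3 translation + 4 quaternion
box_head = nn.Linear(feat_dim, 4)                # (cx, cy, w, h) in [0, 1]

features = torch.randn(8, feat_dim)              # stand-in backbone features
pose = pose_head(features)
trans, quat = pose[:, :3], pose[:, 3:]
quat = quat / quat.norm(dim=1, keepdim=True)     # valid rotation representation
box = torch.sigmoid(box_head(features))

# Example joint objective against ground truth (equal weights are assumed):
gt_trans, gt_quat, gt_box = torch.randn(8, 3), torch.randn(8, 4), torch.rand(8, 4)
gt_quat = gt_quat / gt_quat.norm(dim=1, keepdim=True)
loss = (nn.functional.mse_loss(trans, gt_trans)
        + (1.0 - (quat * gt_quat).sum(dim=1).abs()).mean()   # quaternion distance
        + nn.functional.l1_loss(box, gt_box))
loss.backward()
```
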