Abstract:
Planetary rovers have limited autonomous navigation capabilities. Delays in communication and terrain assessment significantly restrict the explored area and pose a risk to the mission lifespan. Enhancing autonomy is crucial for efficient exploration without constant human intervention. Scientists have explored various techniques to enable autonomous traversal across unfamiliar terrain. One crucial aspect is the detection and avoidance of obstacles and hazardous environments. Rock detection is particularly challenging because rocks vary in color, shape, size, and texture. This study uses transfer learning on Mask R-CNN to detect natural lunar features such as rocks, pebbles, and craters. The proposed model is evaluated using three distinct cameras: the Ricoh Theta 360 for a panoramic view, and the Mint Eye D and ZED 2 for stereo vision capabilities. Furthermore, two lighting conditions, full and partial illumination, are assessed in a simulated lunar analog environment. Finally, validation with Yutu-1 PCAM (Chang'e 3) imagery confirms the model's applicability on the Moon, achieving average detection confidence rates of 90.9% for rocks, 80.15% for pebbles, and 79.35% for craters.
Funding text:
We express our gratitude to Professor Kazuya Yoshida from Tohoku University's SRL for his valuable insights, Assistant Professor Mickael Laine for his constructive critiques, Andrej Orsula from SpaceR, Luxembourg, for his insightful feedback, and Johan Bertrand for his invaluable guidance and constant support throughout this endeavor.