Reference : Learning to Grasp on the Moon from 3D Octree Observations with Deep Reinforcement Learning
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Aerospace & aeronautics engineering
Engineering, computing & technology : Computer science
http://hdl.handle.net/10993/51908
Learning to Grasp on the Moon from 3D Octree Observations with Deep Reinforcement Learning
English
Orsula, Andrej [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Space Robotics]
Bøgh, Simon [Aalborg University > Department of Materials and Production > Robotics and Automation]
Olivares Mendez, Miguel Angel [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Space Robotics]
Martinez Luna, Carol [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Space Robotics]
23-Oct-2022
Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Yes
International
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
23/10/2022 → 27/10/2022
Kyoto
Japan
[en] Space Robotics and Automation ; Reinforcement Learning ; Deep Learning in Grasping and Manipulation
[en] Extraterrestrial rovers with a general-purpose robotic arm have many potential applications in lunar and planetary exploration. Introducing autonomy into such systems is desirable for increasing the time that rovers can spend gathering scientific data and collecting samples. This work investigates the applicability of deep reinforcement learning for vision-based robotic grasping of objects on the Moon. A novel simulation environment with procedurally generated datasets is created to train agents under challenging conditions in unstructured scenes with uneven terrain and harsh illumination. A model-free off-policy actor-critic algorithm is then employed for end-to-end learning of a policy that directly maps compact octree observations to continuous actions in Cartesian space. Experimental evaluation indicates that 3D data representations enable more effective learning of manipulation skills than traditionally used image-based observations. Domain randomization improves the generalization of learned policies to novel scenes with previously unseen objects and different illumination conditions. Finally, we demonstrate zero-shot sim-to-real transfer by evaluating trained agents on a real robot in a Moon-analogue facility.
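As a rough illustration of the approach summarized above, the minimal sketch below sets up a toy environment with a flattened octree-style observation vector and a continuous Cartesian action space, and trains it with SAC, one model-free off-policy actor-critic algorithm. It assumes gymnasium and stable-baselines3 (>= 2.0) are installed; the environment, observation and action dimensions, and reward are illustrative placeholders rather than the paper's actual setup, which is available in the linked drl_grasping repository.

# Minimal sketch, not the authors' implementation: a toy environment with a
# flattened "octree feature" observation and continuous Cartesian actions,
# trained with SAC from stable-baselines3. All names, shapes, and rewards are
# placeholders; see the drl_grasping repository for the real code.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import SAC


class ToyOctreeGraspEnv(gym.Env):
    """Placeholder environment: random 'octree' features, dummy reward."""

    def __init__(self, obs_dim: int = 248, act_dim: int = 5):
        super().__init__()
        # Flattened octree feature vector (stand-in for the real 3D observation).
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(obs_dim,), dtype=np.float32)
        # Continuous action: e.g. a Cartesian displacement plus a gripper command.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(act_dim,), dtype=np.float32)
        self._steps = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._steps = 0
        return self.observation_space.sample(), {}

    def step(self, action):
        self._steps += 1
        obs = self.observation_space.sample()
        reward = float(-np.linalg.norm(action))  # dummy shaping term, not the paper's reward
        terminated = False
        truncated = self._steps >= 50
        return obs, reward, terminated, truncated, {}


if __name__ == "__main__":
    # SAC stands in here; this record does not name the specific off-policy
    # actor-critic algorithm used in the paper.
    model = SAC("MlpPolicy", ToyOctreeGraspEnv(), buffer_size=10_000, verbose=0)
    model.learn(total_timesteps=1_000)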
Researchers ; Professionals ; Students
10.1109/IROS47612.2022.9981661
10.48550/arXiv.2208.00818
https://github.com/AndrejOrsula/drl_grasping
https://youtube.com/watch?v=FZSoOkK6VFc

File(s) associated with this reference

Fulltext file(s):

File: IROS22_1460.pdf
Commentary: Final submission
Version: Author postprint
Size: 6.1 MB
Access: Open access

