Doctoral thesis (Dissertations and theses)
Deep Reinforcement Learning Control for Autonomous Robots Mobility in Highly Uncertain Environments
EL HARIRY, Mhamed Matteo
2025
 

Files


Full Text
phd_thesis_matteo_correct.pdf
Author postprint (39.55 MB)

Details



Keywords :
Deep Reinforcement Learning, AI, Autonomous Robotics, Space Robotics, Space Exploration
Abstract :
[en] This thesis explores the use of deep reinforcement learning (DRL) for enabling robust, autonomous control of robotic systems operating in highly uncertain environments. Motivated by space applications and the need for generalizable learning pipelines, we develop a series of simulation frameworks and experimental platforms that progressively expand the scope, realism, and generalization capability of DRL-based controllers. We begin by introducing GPU-accelerated simulation tools tailored to spacecraft-like dynamics (RANS), showing that physically grounded models and disturbance injection can yield transferable control policies. These findings are validated through DRIFT, a framework presenting a holonomic floating-platform testbed where learned controllers achieve sub-centimeter trajectory tracking despite stochastic disturbances. Building on this, we propose RoboRAN, a modular IsaacLab-based framework that decouples robot and task specifications, enabling reproducible training across diverse platforms such as ground robots, USVs, and microgravity analogs. Sim-to-real evaluations confirm the framework's effectiveness for low-level policy transfer. Finally, FALCON-S broadens this research direction to fixed-wing platforms in ground-effect regimes by integrating a full 6-DoF aerodynamic model, actuator dynamics, and unified CPU–GPU backends. The framework accommodates both learning-based and classical control schemes, allowing systematic benchmarking, ablation studies, and cross-validation. Together, these contributions demonstrate that DRL can be scaled, generalized, and validated across a range of robotic platforms, provided that simulation fidelity, modularity, and hardware alignment are preserved. Additional studies explore visual policy learning for spacecraft inspection and sensor-driven estimation of satellite angular dynamics, broadening the thesis's impact. We conclude by outlining directions toward continual learning, sim-to-real-to-sim adaptation, and integrated world-model architectures for real-world deployment.
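To illustrate the disturbance-injection idea mentioned in the abstract (training controllers against unmodeled forces so policies transfer better), the sketch below shows a minimal planar point-mass stand-in for a floating platform with random forces added at every simulation step. This is an illustrative assumption only: the class name, dynamics, reward, and disturbance ranges are hypothetical and are not taken from the thesis or its frameworks (RANS, DRIFT, RoboRAN, FALCON-S).

```python
# Illustrative sketch only: a planar double-integrator stand-in for a floating
# platform, with random disturbance forces injected at every step. All names
# and parameters here are assumptions for illustration, not the thesis code.
import numpy as np

class DisturbedFloatingPlatform:
    """2D point mass: state = [x, y, vx, vy], action = thrust [fx, fy]."""

    def __init__(self, mass=10.0, dt=0.02, max_disturbance=0.5, seed=0):
        self.mass = mass                        # platform mass [kg]
        self.dt = dt                            # integration step [s]
        self.max_disturbance = max_disturbance  # bound on injected force [N]
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros(4)

    def reset(self):
        # Randomize the initial state so a policy cannot overfit to one start.
        self.state = self.rng.uniform(-1.0, 1.0, size=4)
        return self.state.copy()

    def step(self, action):
        # Disturbance injection: add an unmodeled random force to the thrust.
        disturbance = self.rng.uniform(-self.max_disturbance,
                                       self.max_disturbance, size=2)
        accel = (np.asarray(action, dtype=float) + disturbance) / self.mass

        # Semi-implicit Euler integration of the double-integrator dynamics.
        self.state[2:] += accel * self.dt
        self.state[:2] += self.state[2:] * self.dt

        # Reward: track the origin, with a small control-effort penalty.
        reward = -np.linalg.norm(self.state[:2]) - 0.01 * np.linalg.norm(action)
        return self.state.copy(), reward

# Usage: roll out a random policy; a trained DRL agent would supply `action`.
env = DisturbedFloatingPlatform()
obs = env.reset()
for _ in range(100):
    action = np.random.uniform(-1.0, 1.0, size=2)
    obs, reward = env.step(action)
```

Because the disturbance is resampled at each step, a policy trained on many such rollouts must compensate for forces it never observes directly, which is the intuition behind using disturbance injection to obtain policies that remain valid under real-world perturbations.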
Disciplines :
Computer science
Author, co-author :
EL HARIRY, Mhamed Matteo  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Space Robotics
Language :
English
Title :
Deep Reinforcement Learning Control for Autonomous Robots Mobility in Highly Uncertain Environments
Defense date :
2025
Institution :
Unilu - University of Luxembourg, Luxembourg
Degree :
Docteur en Informatique (DIP_DOC_0006_B)
Promotor :
OLIVARES MENDEZ, Miguel Angel ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Space Robotics
MARTINEZ LUNA, Carol  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Space Robotics
VOOS, Holger  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation
Matthieu Geist
Simon Bøgh
Available on ORBilu :
since 25 December 2025
