Doctoral thesis (Dissertations and theses)
Enhancing Robots’ Situational Awareness using Imperceptible Artificial Landmarks
TOURANI, Ali
2026
 

Files


Full Text
Thesis.pdf
Author postprint (51.5 MB) Creative Commons License - Attribution
Details



Keywords :
Computer Vision; Robotics; Situational Awareness; Visual SLAM; SLAM; Mobile Robots; Fiducial Marker
Abstract :
[en] Fiducial markers have long served as highly distinguishable geometric and visual anchors for establishing correspondences between 3D world points and their 2D image projections. Thanks to these properties, they are widely used across augmented and mixed reality, computer vision, and robotics. In robotics specifically, fiducial markers play a central role in visual sensor calibration, sensor synchronization, Human-Robot Interaction, and Visual SLAM (VSLAM). As robotic systems grow more capable and operate in increasingly diverse environments, these information-rich artificial landmarks remain a practical and reliable means of improving robustness, interpretability, and overall system performance. However, despite these advantages, a fundamental challenge emerges when envisioning environments populated with numerous fiducial markers. If every surface were required to host visually prominent markers, the environment would rapidly become cluttered, intrusive, and aesthetically undesirable for human occupants. This trade-off between the visibility required for robot-friendly perception and the unobtrusiveness expected in human-centric spaces forms the central motivation of this thesis. Accordingly, this thesis investigates the development of iMarkers: unobtrusive, non-distracting, and ideally invisible fiducial markers that remain reliably detectable by robots. The central idea behind iMarkers is to replace traditional printed pigments with microscopic Cholesteric Spherical Reflector (CSR) shells, enabling information to be embedded directly onto surfaces without altering their human-facing appearance. The primary goal of the thesis is to design, employ, and evaluate these imperceptible markers, and to determine whether they can enhance robotic situational awareness by providing semantic and geometric cues without compromising the aesthetics of real-world environments.
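The 3D-to-2D correspondence role that the abstract attributes to fiducial markers can be illustrated with a minimal pinhole-projection sketch. The intrinsics, pose, and marker size below are hypothetical examples, not values from the thesis; real detectors solve the inverse problem (e.g., via PnP) to recover camera pose from the detected marker corners:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D world points into the image with a pinhole camera model.

    points_3d: (N, 3) world coordinates (e.g., the four marker corners)
    K: (3, 3) camera intrinsics; R, t: camera rotation and translation.
    Returns (N, 2) pixel coordinates.
    """
    cam = R @ points_3d.T + t.reshape(3, 1)  # world frame -> camera frame
    uvw = K @ cam                            # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T              # perspective divide

# Four corners of a hypothetical 10 cm square marker on the z = 0 plane.
corners = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                    [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]])
K = np.array([[500.0, 0.0, 320.0],           # focal length 500 px,
              [0.0, 500.0, 240.0],           # principal point (320, 240)
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                # camera axis-aligned with the world
t = np.array([0.0, 0.0, 1.0])                # marker one metre ahead of the camera

pixels = project_points(corners, K, R, t)    # one 2D projection per 3D corner
```

Matching the four known 3D corners to their four detected 2D projections is exactly the correspondence problem that makes such markers useful for calibration and VSLAM.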
To address this, the author has appended five contributed papers that span fabrication considerations, detection sensor designs, the corresponding algorithms, and their potential contributions to robots’ situational awareness. As a concrete robotics case study, the thesis first investigates how classic fiducial markers can be elevated to support higher-level semantic reasoning. By placing printed markers on the walls of indoor environments, the thesis develops a marker-aware Visual SLAM system that converts simple geometric cues into meaningful spatial entities, such as walls, corridors, and rooms. This demonstrates that even lightweight, non-learning-based signals can enrich a robot’s understanding of its environment. Building on this, the thesis introduces hierarchical scene graphs that organize keyframes, map points, markers, and structural components into a multi-layer representation. This hierarchy enables the generation of human-interpretable digital twins, bridging the gap between raw sensor data and structured spatial knowledge. The thesis then presents the first robotics-oriented integration of iMarkers, showing that these imperceptible fiducial markers can be detected, decoded, and semantically assigned within a VSLAM pipeline. Although only small quantities of iMarkers could be fabricated, the experiments show that they offer the same semantic utility as classical markers, validating their use for unobtrusive mapping and augmented indoor environments. Finally, all earlier insights converge into the thesis’s main contribution: vS-Graphs, a robust, scalable, and marker-optional RGB-D VSLAM framework. vS-Graphs tightly couples visual perception, geometric SLAM, and semantic scene understanding. It introduces parallel threads to detect building components (walls and ground surfaces) and to infer structural elements (rooms and floors), and integrates them into a unified, optimizable 3D scene graph.
This representation provides robots with rich, consistent, and actionable situational awareness, whether or not artificial landmarks are present, paving the way toward more transparent, adaptive, and human-aligned environment modeling. Overall, this thesis shows that robots can attain robust situational awareness by integrating geometric reasoning, semantic understanding, and artificial cues such as iMarkers. The proposed methods contribute to more adaptive, explainable, and human-aligned representations of the environment, ultimately improving environment modeling and digital twin generation in robotics.
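The multi-layer scene graph described in the abstract, with layers for keyframes, map points, markers, walls, rooms, and floors, can be approximated with a small illustrative data structure. The class and its API below are assumptions made for exposition, not the thesis’s actual vS-Graphs implementation:

```python
from collections import defaultdict

class SceneGraph:
    """A minimal hierarchical scene graph: nodes live in named layers
    (keyframes, map points, markers, walls, rooms, floors) and directed
    edges connect entities across layers, e.g. a marker to its wall."""

    def __init__(self):
        self.layers = defaultdict(dict)  # layer name -> {node id: attributes}
        self.edges = []                  # (parent id, child id) links

    def add_node(self, layer, node_id, **attrs):
        self.layers[layer][node_id] = attrs

    def link(self, parent_id, child_id):
        self.edges.append((parent_id, child_id))

    def children(self, parent_id):
        return [c for p, c in self.edges if p == parent_id]

# Build a tiny hierarchy: floor -> room -> wall -> marker -> keyframe.
g = SceneGraph()
g.add_node("floors", "floor0")
g.add_node("rooms", "room1", label="office")
g.add_node("walls", "wall3", normal=(1.0, 0.0, 0.0))
g.add_node("markers", "marker17", decoded_id=17)
g.add_node("keyframes", "kf42", pose="T_wc")
for parent, child in [("floor0", "room1"), ("room1", "wall3"),
                      ("wall3", "marker17"), ("marker17", "kf42")]:
    g.link(parent, child)
```

In a full system, the node attributes (poses, plane parameters, marker IDs) would enter a joint optimization, which is what makes the graph "optimizable" rather than a static index.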
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > ARG - Automation & Robotics
Disciplines :
Computer science
Author, co-author :
TOURANI, Ali  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation
Language :
English
Title :
Enhancing Robots’ Situational Awareness using Imperceptible Artificial Landmarks
Defense date :
20 March 2026
Number of pages :
191
Institution :
Unilu - University of Luxembourg [Faculty of Science, Technology and Medicine], Luxembourg, Luxembourg
Degree :
Docteur en Informatique (DIP_DOC_0006_B)
Promotor :
VOOS, Holger  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation
LAGERWALL, Jan  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Physics and Materials Science (DPHYMS)
SANCHEZ LOPEZ, Jose Luis  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation
MUNOZ-SALINAS, Rafael
OSWALD, Martin
Development Goals :
9. Industry, innovation and infrastructure
Name of the research project :
U-AGR-6004 - IAS-AUDACITY TRANSCEND - LAGERWALL Jan
Available on ORBilu :
since 01 April 2026

