Abstract:
Aerial robots play a vital role in various applications where situational awareness concerning the environment is a fundamental demand. As one such use case, drones operating in Global Positioning System (GPS)-denied environments must be equipped with sensors that provide reliable measurements for pose estimation and localization. This paper aims to reconstruct maps of indoor environments and generate 3D scene graphs as a high-level representation using a camera mounted on a drone. Accordingly, an aerial robot equipped with a companion computer and an RGB-D camera was integrated with a Visual Simultaneous Localization and Mapping (VSLAM) framework proposed by the authors. To enhance situational awareness while reconstructing maps, various structural elements, i.e., doors and walls, were labeled with printed fiducial markers, and a dictionary of their topological relations was fed to the system. The system detects these markers and reconstructs a map of the indoor areas enriched with higher-level semantic entities, including corridors and rooms. In this regard, integrating VSLAM into the employed drone provides an end-to-end robot application for GPS-denied environments that generates multi-layered, vision-based situational graphs containing hierarchical representations. To demonstrate the system's practicality, experiments were conducted under real-world conditions in indoor scenarios with dissimilar structural layouts. Evaluations show that the proposed drone application performs adequately with respect to the ground-truth data and its baseline.
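The following is a minimal, hypothetical sketch of the marker-based labeling idea described in the abstract, assuming OpenCV (4.7 or later) with its ArUco module; the SEMANTIC_MARKERS dictionary, its labels, and the function name are illustrative assumptions and are not taken from the paper or its VSLAM framework.

    # Minimal sketch (assumptions: OpenCV >= 4.7 with the ArUco module;
    # the marker-ID-to-label dictionary below is hypothetical, not the authors' own).
    import cv2

    # Hypothetical dictionary linking marker IDs to structural elements and
    # a simple topological relation (which larger element they belong to).
    SEMANTIC_MARKERS = {
        0: {"label": "door", "belongs_to": "wall_1"},
        1: {"label": "wall", "belongs_to": "room_A"},
    }

    def detect_semantic_markers(bgr_image):
        """Detect fiducial markers in a camera frame and attach semantic labels."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        params = cv2.aruco.DetectorParameters()
        detector = cv2.aruco.ArucoDetector(aruco_dict, params)
        corners, ids, _ = detector.detectMarkers(gray)

        detections = []
        if ids is not None:
            for marker_corners, marker_id in zip(corners, ids.flatten()):
                semantics = SEMANTIC_MARKERS.get(int(marker_id))
                if semantics:  # keep only markers listed in the dictionary
                    detections.append({"id": int(marker_id),
                                       "corners": marker_corners,
                                       **semantics})
        return detections

In a sketch of this kind, each labeled detection could then be attached to the corresponding map element so that higher-level entities such as rooms and corridors can be inferred from the accumulated semantic information.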