Simultaneous localization and mapping (SLAM); Local optimizations; Loop closure; Scene-graphs; Optimization; Semantics; Robots; Real-time systems
Abstract :
[en] The hierarchical nature of 3D scene graphs aligns well with the structure of man-made environments, making them highly suitable for representation. Beyond representation, however, their embedded semantics and geometry can also be leveraged to improve the efficiency of map and pose optimization, an opportunity largely overlooked by existing methods. We introduce Situational Graphs 2.0 (S-Graphs 2.0), which exploits the hierarchical structure of indoor scenes for efficient data management and optimization. Our approach builds a four-layer situational graph comprising Keyframes, Walls, Rooms, and Floors. Our first contribution lies in the front-end: a floor detection module capable of identifying stairways and assigning floor-level semantic relations to the underlying layers (Keyframes, Walls, and Rooms). These floor-level semantics allow us to propose a floor-based loop-closure strategy that effectively rejects the false-positive closures that typically arise from aliasing between different floors of a building. Our second novelty lies in leveraging the representation hierarchy in the optimization. Our proposal consists of: (1) local optimization over a window of recent keyframes and their connected components across the four representation layers, (2) floor-level global optimization, which during loop closures focuses only on keyframes and their connections within the current floor, and (3) room-level local optimization, which marginalizes redundant keyframes that share observations within a room, reducing the computational footprint. We validate our algorithm extensively in different real multi-floor environments. Our approach shows state-of-the-art accuracy in large-scale multi-floor environments, estimating hierarchical representations up to 10x faster, on average, than competing baselines.
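The abstract's floor-based loop-closure idea can be illustrated with a minimal sketch. All class and method names below are hypothetical (they are not the authors' actual API); the sketch only shows the filtering principle: candidate keyframes on a different floor than the query are rejected, which removes the cross-floor aliasing the abstract describes, and a sliding window of recent keyframes delimits the local optimization.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified stand-ins for the record's four layers
# (Keyframes, Walls, Rooms, Floors); only keyframes are modeled here.

@dataclass
class Keyframe:
    kf_id: int
    floor_id: int   # floor-level semantic relation assigned by the front-end
    room_id: int

@dataclass
class SituationalGraph:
    keyframes: list = field(default_factory=list)

    def add_keyframe(self, kf: Keyframe) -> None:
        self.keyframes.append(kf)

    def loop_closure_candidates(self, query: Keyframe, matches: list) -> list:
        # Floor-based loop closure: keep only candidates on the same
        # floor as the query, rejecting false positives caused by
        # aliasing between similar-looking floors.
        return [m for m in matches if m.floor_id == query.floor_id]

    def local_window(self, window: int = 5) -> list:
        # Local optimization acts on a window of recent keyframes (the
        # full system also pulls in their connected walls/rooms/floor).
        return self.keyframes[-window:]
```

A short usage example: with eight keyframes split across two floors, a query on floor 0 only ever matches against the four floor-0 keyframes.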
Disciplines :
Computer science
Author, co-author :
BAVLE, Hriday ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust > Automation > Team Holger VOOS
Sanchez-Lopez, Jose Luis ; University of Luxembourg, Automation and Robotics Research Group, Interdisciplinary Centre for Security, Reliability and Trust, Esch-sur-Alzette, Luxembourg
SHAHEER, Muhammad ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation
Civera, Javier ; Universidad de Zaragoza, I3A, Zaragoza, Spain
VOOS, Holger ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation
External co-authors :
yes
Language :
English
Title :
S-Graphs 2.0 – A Hierarchical-Semantic Optimization and Loop Closure for SLAM
Publication date :
09 October 2025
Journal title :
IEEE Robotics and Automation Letters
eISSN :
2377-3766
Publisher :
Institute of Electrical and Electronics Engineers Inc.
Funders :
Spanish Government; Luxembourg National Research Fund
Funding text :
Received 28 February 2025; accepted 19 September 2025. Date of publication 9 October 2025; date of current version 28 October 2025. This article was recommended for publication by Associate Editor N. Bellotto and Editor S. Behnke upon evaluation of the reviewers' comments. This work was supported in part by the Spanish Government under Grant PID2021-127685NB-I00 and Grant TED2021-131150B-I00, and in part by the Luxembourg National Research Fund (FNR) through the DEUS Project under Ref. C22/IS/17387634/DEUS. (Corresponding author: Muhammad Shaheer.) Hriday Bavle, Jose Luis Sanchez-Lopez, and Muhammad Shaheer are with the Automation and Robotics Research Group, Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, 4365 Esch-sur-Alzette, Luxembourg (e-mail: hriday.bavle@uni.lu; joseluis.sanchezlopez@uni.lu; muhammad.shaheer@uni.lu). For the purpose of open access, and in fulfilment of the obligations arising from the grant agreement, the author has applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission.
C. Cadena et al., “Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,” IEEE Trans. Robot., vol. 32, no. 6, pp. 1309–1332, Dec. 2016.
N. Hughes et al., “Foundations of spatial perception for robotics: Hierarchical representations and real-time systems,” Int. J. Robot. Res., vol. 43, no. 10, pp. 1457–1505, 2024.
E. Greve, M. Büchner, N. Vödisch, W. Burgard, and A. Valada, “Collaborative dynamic 3D scene graphs for automated driving,” in Proc. IEEE Int. Conf. Robot. Automat., 2024, pp. 11118–11124.
H. Bavle, J. L. Sanchez-Lopez, M. Shaheer, J. Civera, and H. Voos, “S-Graphs++: Real-time localization and mapping leveraging hierarchical representations,” IEEE Robot. Autom. Lett., vol. 8, no. 8, pp. 4927–4934, Aug. 2023.
J. Zhang and S. Singh, “LOAM: LiDAR odometry and mapping in real-time,” in Proc. Robot.: Sci. Syst., vol. 2, no. 9, 2014, pp. 1–9.
T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, “LIO-SAM: Tightly-coupled LiDAR inertial odometry via smoothing and mapping,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2020, pp. 5135–5142.
Z. Liu and F. Zhang, “BALM: Bundle adjustment for LiDAR mapping,” IEEE Robot. Autom. Lett., vol. 6, no. 2, pp. 3184–3191, Apr. 2021.
W. Xu and F. Zhang, “FAST-LIO: A fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter,” IEEE Robot. Autom. Lett., vol. 6, no. 2, pp. 3317–3324, Apr. 2021.
W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, “FAST-LIO2: Fast direct LiDAR-inertial odometry,” IEEE Trans. Robot., vol. 38, no. 4, pp. 2053–2073, Aug. 2022.
T. Shan and B. Englot, “LeGO-LOAM: Lightweight and ground-optimized LiDAR odometry and mapping on variable terrain,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2018, pp. 4758–4765.
L. Li et al., “SA-LOAM: Semantic-aided LiDAR SLAM with loop closure,” in Proc. IEEE Int. Conf. Robot. Automat., 2021, pp. 7627–7634.
R. Dubé et al., “SegMap: Segment-based mapping and localization using data-driven descriptors,” Int. J. Robot. Res., vol. 39, no. 2-3, pp. 339–355, Jul. 2019.
X. Chen, A. Milioto, E. Palazzolo, P. Giguère, J. Behley, and C. Stachniss, “SuMa++: Efficient LiDAR-based semantic SLAM,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2019, pp. 4530–4537.
G. Grisetti, R. Kümmerle, C. Stachniss, U. Frese, and C. Hertzberg, “Hierarchical optimization on manifolds for online 2D and 3D mapping,” in Proc. IEEE Int. Conf. Robot. Automat., 2010, pp. 273–278.
H. Kretzschmar and C. Stachniss, “Information-theoretic compression of pose graphs for laser-based SLAM,” Int. J. Robot. Res., vol. 31, no. 11, pp. 1219–1230, 2012.
D. Droeschel and S. Behnke, “Efficient continuous-time SLAM for 3D LiDAR-based online mapping,” in Proc. IEEE Int. Conf. Robot. Automat., 2018, pp. 5000–5007.
K. Koide, M. Yokozuka, S. Oishi, and A. Banno, “Globally consistent and tightly coupled 3D LiDAR inertial mapping,” in Proc. Int. Conf. Robot. Automat., 2022, pp. 5622–5628.
W. Hess, D. Kohler, H. Rapp, and D. Andor, “Real-time loop closure in 2D LiDAR SLAM,” in Proc. IEEE Int. Conf. Robot. Automat., 2016, pp. 1271–1278.
I. Armeni et al., “3D scene graph: A structure for unified semantics, 3D space, and camera,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 5663–5672.
J. Wald, H. Dhamo, N. Navab, and F. Tombari, “Learning 3D semantic scene graphs from 3D indoor reconstructions,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 3960–3969.
U.-H. Kim, J.-M. Park, T.-J. Song, and J.-H. Kim, “3-D scene graph: A sparse and semantic representation of physical environments for intelligent agents,” IEEE Trans. Cybern., vol. 50, no. 12, pp. 4921–4933, Dec. 2020.
S.-C. Wu, J. Wald, K. Tateno, N. Navab, and F. Tombari, “SceneGraphFusion: Incremental 3D scene graph prediction from RGB-D sequences,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 7511–7521.
A. Rosinol, A. Gupta, M. Abate, J. Shi, and L. Carlone, “3D dynamic scene graphs: Actionable spatial perception with places, objects, and humans,” in Proc. Robot.: Sci. Syst., 2020.
H. Bavle, J. L. Sanchez-Lopez, M. Shaheer, J. Civera, and H. Voos, “Situational graphs for robot navigation in structured indoor environments,” IEEE Robot. Autom. Lett., vol. 7, no. 4, pp. 9107–9114, Oct. 2022.
J. A. Millan-Romera, H. Bavle, M. Shaheer, M. R. Oswald, H. Voos, and J. L. Sanchez-Lopez, “Learning high-level semantic-relational concepts for SLAM,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2024, pp. 9803–9810.
C. Kassab, M. Mattamala, L. Zhang, and M. Fallon, “Language-extended indoor SLAM (LEXIS): A versatile system for real-time visual scene understanding,” in Proc. IEEE Int. Conf. Robot. Automat., 2024, pp. 15988–15994.
C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM,” IEEE Trans. Robot., vol. 37, no. 6, pp. 1874–1890, Dec. 2021.
L. Zhang et al., “Hilti-Oxford dataset: A millimeter-accurate benchmark for simultaneous localization and mapping,” IEEE Robot. Autom. Lett., vol. 8, no. 1, pp. 408–415, Jan. 2023.
K. Koide, J. Miura, and E. Menegatti, “A portable three-dimensional LiDAR-based system for long-term and wide-area people behavior measurement,” Int. J. Adv. Robotic Syst., vol. 16, no. 2, 2019, Art. no. 1729881419841532.
G. Kim, S. Choi, and A. Kim, “Scan context++: Structural place recognition robust to rotation and lateral variations in urban environments,” IEEE Trans. Robot., vol. 38, no. 3, pp. 1856–1874, Jun. 2022.