Reference: VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems
Document type: Scientific journal article
Discipline: Engineering, computing & technology > Electrical & electronics engineering
Permalink: http://hdl.handle.net/10993/46658
Language: English
Author(s): Bavle, Hriday (University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation); De La Puente, P.; How, J. P.; Campoy, P.
Publication year: 2020
Journal: IEEE Access
Volume: 8
Pages: 60704-60718
Peer reviewed: Yes (verified by ORBilu)
Audience: International
ISSN: 2169-3536
Keywords [en]: aerospace robotics; distance measurement; feature extraction; graph theory; mobile robots; object detection; pose estimation; robot vision; SLAM (robots); standard RGB-D dataset; state of the art object detectors; graph-based approach; visual-inertial odometry; low-level visual odometry; lightweight visual semantic SLAM framework; sparse semantic map; complete 6DoF pose; detected semantic objects; planar surfaces; geometrical information; board aerial robotic platforms; real-time visual semantic SLAM framework; pose estimate; high-level semantic information; indoor environments; aerial robotic systems; visual planar semantic SLAM; VPS-SLAM; Semantics; Simultaneous localization and mapping; Three-dimensional displays; Detectors; Visualization; Data mining; SLAM; visual SLAM; visual semantic SLAM; autonomous aerial robots; UAVs
Abstract [en]: Indoor environments have an abundant presence of high-level semantic information which can provide a better understanding of the environment for robots and reduce the uncertainty in their pose estimates. Although semantic information has proved to be useful, there are several challenges faced by the research community to accurately perceive, extract and utilize such semantic information from the environment. In order to address these challenges, in this paper we present a lightweight and real-time visual semantic SLAM framework running on board aerial robotic platforms. This novel method combines low-level visual/visual-inertial odometry (VO/VIO) with geometrical information corresponding to planar surfaces extracted from detected semantic objects. Extracting the planar surfaces from selected semantic objects provides enhanced robustness and makes it possible to rapidly and precisely improve the metric estimates, while generalizing to several object instances irrespective of their shape and size. Our graph-based approach can integrate several state-of-the-art VO/VIO algorithms along with state-of-the-art object detectors in order to estimate the complete 6DoF pose of the robot while simultaneously creating a sparse semantic map of the environment. No prior knowledge of the objects is required, which is a significant advantage over other works. We test our approach on a standard RGB-D dataset, comparing its performance with state-of-the-art SLAM algorithms. We also perform several challenging indoor experiments validating our approach in the presence of distinct environmental conditions, and furthermore test it on board an aerial robot.
Video: https://vimeo.com/368217703
Released code: https://bitbucket.org/hridaybavle/semantic_slam.git
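As an illustration of the graph-based idea summarized in the abstract (fusing drifting VO/VIO odometry with constraints from planar landmarks), the following is a minimal, self-contained sketch, not the authors' released implementation (see the Bitbucket link above). It reduces poses to 3D positions and a plane to a known unit normal with an unknown offset; the wall location, noise levels, and the residuals helper are illustrative assumptions.

```python
# Toy pose-graph sketch: fuse noisy odometry with plane-landmark observations
# and jointly optimize poses and the plane offset (illustrative only).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Ground truth: robot moves 1 m per step along x; one wall at x = 5 (normal -x).
T = 6
true_pos = np.stack([np.arange(T, dtype=float), np.zeros(T), np.zeros(T)], axis=1)
plane_normal = np.array([-1.0, 0.0, 0.0])   # assumed known from the detected object
true_offset = 5.0                           # plane: n.x + d = 0  ->  d = 5

# Simulated measurements.
odom = np.diff(true_pos, axis=0) + rng.normal(0, 0.05, (T - 1, 3))  # drifting odometry
plane_range = plane_normal @ true_pos.T + true_offset               # signed distance to wall
plane_range += rng.normal(0, 0.02, T)                               # plane observations

def residuals(x):
    """Stack prior, odometry and plane factors of the toy graph."""
    pos = x[:3 * T].reshape(T, 3)
    d = x[3 * T]
    res = [pos[0],                                      # prior: first pose at the origin
           ((pos[1:] - pos[:-1]) - odom).ravel(),       # odometry (between-pose) factors
           (pos @ plane_normal + d) - plane_range]      # planar-landmark factors
    return np.concatenate([np.ravel(r) for r in res])

# Initial guess: dead-reckoned positions and a rough plane offset.
init_pos = np.vstack([np.zeros(3), np.cumsum(odom, axis=0)])
x0 = np.concatenate([init_pos.ravel(), [4.0]])
sol = least_squares(residuals, x0)

est_pos = sol.x[:3 * T].reshape(T, 3)
print("estimated plane offset:", sol.x[3 * T])
print("final-pose error before:", np.linalg.norm(init_pos[-1] - true_pos[-1]))
print("final-pose error after :", np.linalg.norm(est_pos[-1] - true_pos[-1]))
```

Jointly optimizing the pose chain and the plane offset pulls the dead-reckoned trajectory back toward the plane observations, which is the role the planar semantic landmarks play in the paper's full 6DoF factor graph.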
DOI: 10.1109/ACCESS.2020.2983121

File(s) associated with this reference

Fulltext file(s):

File          Version              Size     Access
09045978.pdf  Publisher postprint  4.17 MB  Open access

All documents in ORBilu are protected by a user license.