Article (Scientific journals)
VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems
Bavle, Hriday; Puente, P. De La; How, J. P. et al.
2020, in IEEE Access, 8, p. 60704-60718
Peer Reviewed verified by ORBi

Files

Full Text: 09045978.pdf (Publisher postprint, 4.27 MB)

Details



Keywords :
aerospace robotics;distance measurement;feature extraction;graph theory;mobile robots;object detection;pose estimation;robot vision;SLAM (robots);standard RGB-D dataset;state of the art object detectors;graph-based approach;visual-inertial odometry;low-level visual odometry;lightweight visual semantic SLAM framework;sparse semantic map;complete 6DoF pose;detected semantic objects;planar surfaces;geometrical information;board aerial robotic platforms;real-time visual semantic SLAM framework;pose estimate;high-level semantic information;indoor environments;aerial robotic systems;visual planar semantic SLAM;VPS-SLAM;Semantics;Simultaneous localization and mapping;Three-dimensional displays;Detectors;Visualization;Data mining;SLAM;visual SLAM;visual semantic SLAM;autonomous aerial robots;UAVs
Abstract :
[en] Indoor environments contain an abundance of high-level semantic information, which can give robots a better understanding of their surroundings and reduce the uncertainty in their pose estimates. Although semantic information has proved useful, accurately perceiving, extracting, and utilizing it from the environment remains challenging for the research community. To address these challenges, in this paper we present a lightweight, real-time visual semantic SLAM framework running on board aerial robotic platforms. This novel method combines low-level visual/visual-inertial odometry (VO/VIO) with geometric information from planar surfaces extracted from detected semantic objects. Extracting planar surfaces from selected semantic objects provides enhanced robustness, makes it possible to refine the metric estimates rapidly and precisely, and generalizes to several object instances irrespective of their shape and size. Our graph-based approach can integrate several state-of-the-art VO/VIO algorithms and object detectors to estimate the complete 6DoF pose of the robot while simultaneously creating a sparse semantic map of the environment. No prior knowledge of the objects is required, which is a significant advantage over other works. We test our approach on a standard RGB-D dataset, comparing its performance with state-of-the-art SLAM algorithms. We also perform several challenging indoor experiments validating our approach in the presence of distinct environmental conditions, and furthermore test it on board an aerial robot. Video: https://vimeo.com/368217703. Released code: https://bitbucket.org/hridaybavle/semantic_slam.git
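The abstract describes a graph-based back end that fuses low-level VO/VIO constraints with plane-landmark constraints derived from detected semantic objects. The sketch below is a deliberately simplified, one-dimensional illustration of that factor-graph structure, not the paper's actual SE(3)/3D-plane formulation: all variable names, noise levels, and the linear least-squares solver are illustrative assumptions made for this toy example.

# Toy 1-D sketch of the graph-SLAM idea: drifting odometry edges and
# plane-landmark observations are fused in one least-squares problem.
# This is NOT the paper's implementation (which uses full 6DoF poses and
# 3D planar surfaces); it only illustrates the structure of the graph.
import numpy as np

# Ground truth, used only to simulate measurements.
true_poses = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # robot positions x_i
true_planes = np.array([5.0, -1.0])                 # wall offsets p_j

rng = np.random.default_rng(0)
# Odometry: noisy relative displacements u_i ~ x_{i+1} - x_i (drifts when chained).
odom = np.diff(true_poses) + rng.normal(0.0, 0.05, size=4)
# Plane observations: (pose index i, plane index j, measured signed distance p_j - x_i).
obs = [(i, j, true_planes[j] - true_poses[i] + rng.normal(0.0, 0.02))
       for i in range(5) for j in range(2)]

# Unknowns: 5 poses followed by 2 plane offsets, stacked in one state vector.
n_poses, n_planes = 5, 2
rows, rhs = [], []

# Prior on the first pose anchors the gauge freedom of the graph.
r = np.zeros(n_poses + n_planes)
r[0] = 1.0
rows.append(r); rhs.append(0.0)

# Odometry factors: x_{i+1} - x_i = u_i.
for i, u in enumerate(odom):
    r = np.zeros(n_poses + n_planes)
    r[i + 1], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(u)

# Plane-landmark factors: p_j - x_i = d_ij.
for i, j, d in obs:
    r = np.zeros(n_poses + n_planes)
    r[n_poses + j], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(d)

# Solve the whole graph jointly; plane observations correct odometry drift.
A = np.vstack(rows)
x, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
print("estimated poses :", np.round(x[:n_poses], 3))
print("estimated planes:", np.round(x[n_poses:], 3))

In the paper, the same joint-estimation idea is applied nonlinearly to the full 6DoF robot pose and to 3D planar surfaces extracted from detected objects, with VO/VIO supplying the odometry edges.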
Disciplines :
Electrical & electronics engineering
Author, co-author :
Bavle, Hriday  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Automation
Puente, P. De La
How, J. P.
Campoy, P.
External co-authors :
yes
Language :
English
Title :
VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems
Publication date :
2020
Journal title :
IEEE Access
ISSN :
2169-3536
Publisher :
Institute of Electrical and Electronics Engineers, United States - New Jersey
Volume :
8
Pages :
60704-60718
Peer reviewed :
Peer Reviewed verified by ORBi
Available on ORBilu :
since 23 March 2021

Statistics

Number of views: 158 (11 by Unilu)
Number of downloads: 130 (5 by Unilu)

Scopus citations®: 67
Scopus citations® without self-citations: 63
WoS citations: 46