Kabiri, Meisam, in Sensors (2022), 23(1), 188.

Efficient localisation plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), contributing to improved control, safety, power economy, etc. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities to enhance the localisation of UAVs and UGVs. In this paper, we review radio frequency (RF)-based approaches to localisation. We review the RF features that can be utilized for localisation and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localisation for both UAVs and UGVs is examined, and the envisioned role of 5G NR in localisation enhancement, together with future research directions, is explored.

Bavle, Hriday. E-print/Working paper (2022).

In this paper, we present an evolved version of the Situational Graphs, which jointly models in a single optimizable factor graph a SLAM graph, as a set of robot keyframes containing their associated measurements and robot poses, and a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between those elements. Our proposed S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging the high-level information of the environment. To extract such high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulations of distinct indoor environments, real datasets captured over several construction sites and office environments, and a real public dataset of indoor office environments. S-Graphs+ outperforms relevant baselines in the majority of the datasets while extending the robot's situational awareness with a four-layered scene model. Moreover, we make the algorithm available as a Docker file.
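The four-layered hierarchy described in the S-Graphs+ abstract above can be made concrete with a minimal sketch. This is not the authors' implementation: the class and field names below are hypothetical, and a real system would register these layers as vertices and factors of an optimizable factor graph (e.g., in g2o or GTSAM) rather than plain containers.

```python
# Hypothetical sketch of the four S-Graphs+ layers; illustrative only.
from dataclasses import dataclass, field

@dataclass
class Keyframe:                 # layer 1: robot pose estimates
    stamp: float
    pose: list                  # e.g., [x, y, z, qx, qy, qz, qw]

@dataclass
class Wall:                     # layer 2: planar wall surfaces
    plane: list                 # plane coefficients [nx, ny, nz, d]
    keyframe_ids: list = field(default_factory=list)  # observing keyframes

@dataclass
class Room:                     # layer 3: the set of wall planes forming one room
    wall_ids: list = field(default_factory=list)

@dataclass
class Floor:                    # layer 4: the rooms on one floor level
    level: int = 0
    room_ids: list = field(default_factory=list)

@dataclass
class SGraphPlus:               # the joint graph over all four layers
    keyframes: dict = field(default_factory=dict)
    walls: dict = field(default_factory=dict)
    rooms: dict = field(default_factory=dict)
    floors: dict = field(default_factory=dict)
```

Optimizing such a graph jointly refines keyframe poses and wall planes subject to the room and floor constraints, which is what lets the method improve the pose estimate and the high-level scene model at the same time.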
Tourani, Ali, in Sensors (2022), 22(23), 9297.

In recent years, Simultaneous Localization and Mapping (SLAM) systems have shown significant gains in performance, accuracy, and efficiency. In this regard, Visual Simultaneous Localization and Mapping (VSLAM) methods refer to the SLAM approaches that employ cameras for pose estimation and map reconstruction; they are preferred over Light Detection And Ranging (LiDAR)-based methods due to their lighter weight, lower acquisition costs, and richer environment representation. Hence, several VSLAM approaches have evolved using different camera types (e.g., monocular or stereo), have been tested on various datasets (e.g., Technische Universität München (TUM) RGB-D or European Robotics Challenge (EuRoC)) and in different conditions (i.e., indoors and outdoors), and employ multiple methodologies to better understand their surroundings. These variations have made the topic popular among researchers and have produced a wide range of methods. The primary intent of this paper is to assimilate the wide range of works in VSLAM and present their recent advances, along with the existing challenges and trends. This survey gives a big picture of the current foci of the robotics and VSLAM fields based on the objectives of the state of the art. The paper provides an in-depth literature survey of fifty impactful articles published in the VSLAM domain, classified by different characteristics, including novelty domain, objectives, employed algorithms, and semantic level. The paper also discusses current trends and contemporary directions of VSLAM techniques that may help researchers investigate them.

Agha, Hakam, in Light: Science and Applications (2022), 11, 309, DOI 10.1038/s41377-022-01002-4.

The seemingly simple step of molding a cholesteric liquid crystal into spherical shape, yielding a Cholesteric Spherical Reflector (CSR), has profound optical consequences that open a range of opportunities for potentially transformative technologies. The chiral Bragg diffraction resulting from the helical self-assembly of cholesterics becomes omnidirectional in CSRs. This turns them into selective retroreflectors that are exceptionally easy to distinguish, regardless of background, by simple and low-cost machine vision, while at the same time they can be made largely imperceptible to human vision. This allows them to be distributed in human-populated environments, laid out in the form of QR-code-like markers that help robots and Augmented Reality (AR) devices to operate reliably and to identify items in their surroundings. At the scale of individual CSRs, unpredictable features within each marker turn them into Physical Unclonable Functions (PUFs), of great value for secure authentication. Via the machines reading them, CSR markers can thus act as trustworthy yet unobtrusive links between the physical world (buildings, vehicles, packaging, ...) and its digital twin computer representation. This opens opportunities to address pressing challenges in logistics and supply chain management, recycling and the circular economy, sustainable construction of the built environment, and many other fields of individual, societal and commercial importance.
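As background for the chiral Bragg diffraction central to the CSR entry above, the standard cholesteric reflection relations (textbook results, not quoted from the abstract) connect the reflected wavelength to the helix pitch p and the refractive indices:

```latex
% Central reflected wavelength at normal incidence, with average index \bar{n}:
\lambda_0 = \bar{n}\,p, \qquad \bar{n} = \tfrac{1}{2}\,(n_o + n_e)
% Reflection bandwidth set by the birefringence \Delta n = n_e - n_o:
\Delta\lambda = \Delta n\,p
% Approximate blue shift at oblique incidence angle \theta:
\lambda(\theta) \approx \bar{n}\,p\cos\theta
```

In a flat film this reflection holds only near one viewing direction; molding the cholesteric into a sphere is what makes the same condition available from every direction, hence the omnidirectional retroreflection exploited for machine-readable markers.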
Bavle, Hriday. E-print/Working paper (2022).

Mobile robots extract information from their environment to understand their current situation and to enable intelligent decision making and autonomous task execution. In our previous work, we introduced the concept of Situational Graphs (S-Graphs), which combine in a single optimizable graph the robot keyframes and the representation of the environment with geometric, semantic and topological abstractions. Although S-Graphs were built and optimized in real time and demonstrated state-of-the-art results, they are limited to specific structured environments with hand-tuned dimensions of rooms and corridors. In this work, we present an advanced version of the Situational Graphs (S-Graphs+), consisting of a five-layered optimizable graph that includes: (1) a metric layer along with the graph of free-space clusters, (2) a keyframe layer where the robot poses are registered, (3) a metric-semantic layer consisting of the extracted planar walls, (4) a novel rooms layer constraining the extracted planar walls, and (5) a novel floors layer encompassing the rooms within a given floor level. S-Graphs+ demonstrates improved performance over S-Graphs, efficiently extracting room information while simultaneously improving the pose estimate of the robot, thus extending the robot's situational awareness in the form of a five-layered environmental model.

Bavle, Hriday. E-print/Working paper (2022).

Bavle, Hriday. E-print/Working paper (2021).

Bavle, Hriday, in IEEE Access (2020), 8.

Indoor environments have an abundant presence of high-level semantic information, which can provide a better understanding of the environment for robots and improve the uncertainty in their pose estimates. Although semantic information has proved to be useful, there are several challenges faced by the research community in accurately perceiving, extracting and utilizing such semantic information from the environment. In order to address these challenges, in this paper we present a lightweight and real-time visual semantic SLAM framework running on board aerial robotic platforms. This novel method combines low-level visual/visual-inertial odometry (VO/VIO) with geometrical information corresponding to planar surfaces extracted from detected semantic objects. Extracting the planar surfaces from selected semantic objects provides enhanced robustness and makes it possible to precisely improve the metric estimates rapidly, simultaneously generalizing to several object instances irrespective of their shape and size. Our graph-based approach can integrate several state-of-the-art VO/VIO algorithms along with state-of-the-art object detectors in order to estimate the complete 6DoF pose of the robot while simultaneously creating a sparse semantic map of the environment. No prior knowledge of the objects is required, which is a significant advantage over other works. We test our approach on a standard RGB-D dataset, comparing its performance with state-of-the-art SLAM algorithms. We also perform several challenging indoor experiments validating our approach in the presence of distinct environmental conditions, and furthermore test it on board an aerial robot. Video: https://vimeo.com/368217703. Released code: https://bitbucket.org/hridaybavle/semantic_slam.git.
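The planar-surface extraction that the IEEE Access entry above builds on (and that recurs in the altitude-estimation and particle-filter entries further down) can be sketched as a small RANSAC-style search for the dominant near-horizontal plane in an object's point cloud. This is a hedged illustration under simplifying assumptions, not the authors' code; the function names are hypothetical and only numpy is used.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points: returns unit normal n and offset d, with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, -normal.dot(centroid)

def dominant_horizontal_plane(cloud, iters=100, tol=0.02, max_tilt_deg=10.0):
    """RANSAC-style search for the largest near-horizontal plane in an Nx3 cloud."""
    z_axis = np.array([0.0, 0.0, 1.0])    # gravity-aligned axis (from the IMU in practice)
    rng = np.random.default_rng(0)
    best = None
    for _ in range(iters):
        sample = cloud[rng.choice(len(cloud), 3, replace=False)]
        n, d = fit_plane(sample)
        tilt = np.degrees(np.arccos(min(1.0, abs(n.dot(z_axis)))))
        if tilt > max_tilt_deg:
            continue                       # reject candidates that are not horizontal
        inliers = np.abs(cloud @ n + d) < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return None if best is None else fit_plane(cloud[best])  # refined fit on all inliers
```

The refined plane gives both a landmark for the sparse semantic map and, via its offset along the gravity axis, a metric constraint on the robot's pose.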
et al., in International Journal of Advanced Robotic Systems (2020), 17(3), 1–20.

A variety of open-source software tools are currently available to help build autonomous mobile robots. These tools have proven their effectiveness in developing different types of robotic systems, but there are still needs related to safety and efficiency that are not sufficiently covered. This article describes recent advances in the Aerostack software framework that address part of these needs, which may become critical in the case of aerial robots. The article describes a software tool that helps to develop the executive system, an important component of the control architecture whose characteristics significantly affect the quality of the final autonomous robotic system. The presented tool uses an original solution for execution control that aims at simplifying mission specification and protecting against errors, while also considering the efficiency needs of aerial robots. The effectiveness of the tool was evaluated by building an experimental autonomous robot. The results of the evaluation show that it provides significant benefits in usability and reliability with acceptable development effort and computational cost. The tool is based on the Robot Operating System and is publicly available as part of the latest release of the Aerostack software framework (version 3.0).

Bavle, Hriday, et al., in Frontiers of Information Technology and Electronic Engineering (2019), 20(1), 60–75.

Execution control is a critical task of robot architectures, with a deep impact on the quality of the final system. In this study, we describe a general method for execution control, which is part of the Aerostack software framework for aerial robotics, and present the technical challenges and design decisions behind it. The proposed method has an original design combining a distributed approach for the execution control of behaviors (such as situation checking and performance monitoring) with centralized coordination to ensure consistency of the concurrent execution. We conduct experiments to evaluate the method. The experimental results show that the method is general and usable with acceptable development effort, working efficiently on different types of aerial missions. The method is supported by standards based on the Robot Operating System (ROS), contributing to its general use, and is available as an open-source project integrated in the Aerostack framework. Therefore, its technical details are fully accessible to developers and freely available to be used in the development of new aerial robotic systems.
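The combination of distributed behavior execution with centralized consistency checking described in the execution-control entry above can be illustrated with a small sketch. The class and behavior names are hypothetical and do not reflect Aerostack's actual API; this is only one plausible shape for such a coordinator.

```python
# Hedged sketch of centralized consistency checking over concurrent behaviors.
class BehaviorCoordinator:
    def __init__(self, incompatible):
        # incompatible: behavior name -> set of behaviors it cannot run alongside,
        # e.g. {"LAND": {"TAKE_OFF", "FOLLOW_PATH"}}
        self.incompatible = incompatible
        self.active = set()

    def request_activation(self, behavior):
        """Grant activation only if no currently active behavior conflicts with it."""
        conflicts = self.incompatible.get(behavior, set()) & self.active
        if conflicts:
            return False, conflicts        # caller must deactivate these first
        self.active.add(behavior)
        return True, set()

    def deactivate(self, behavior):
        self.active.discard(behavior)
```

Each behavior would still monitor its own situation and performance locally (the distributed part); only activation and deactivation pass through this central check, which is what keeps concurrent execution consistent.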
Bavle, Hriday, et al., in Journal of Intelligent and Robotic Systems (2019), 95(2), 601–627.

Search and Rescue (SAR) missions represent an important challenge in the robotics research field, as they usually involve exceedingly variable scenarios which require a high level of autonomy and versatile decision-making capabilities. This challenge becomes even more relevant in the case of aerial robotic platforms, owing to their limited payload and computational capabilities. In this paper, we present a fully autonomous aerial robotic solution for executing complex SAR missions in unstructured indoor environments. The proposed system is based on the combination of a complete hardware configuration and a flexible system architecture which allows the execution of high-level missions in a fully unsupervised manner (i.e., without human intervention). In order to obtain flexible and versatile behaviors from the proposed aerial robot, several learning-based capabilities have been integrated for target recognition and interaction. The target recognition capability includes a supervised learning classifier based on a computationally efficient Convolutional Neural Network (CNN) model trained for target/background classification, while the capability to interact with the target for rescue operations introduces a novel Image-Based Visual Servoing (IBVS) algorithm which integrates a recent deep reinforcement learning method named Deep Deterministic Policy Gradients (DDPG). In order to train the aerial robot for performing IBVS tasks, a reinforcement learning framework has been developed, which integrates a deep reinforcement learning agent (e.g., DDPG) with a Gazebo-based simulator for aerial robotics. The proposed system has been validated in a wide range of simulation flights, using Gazebo and PX4 Software-In-The-Loop, and in real flights in cluttered indoor environments, demonstrating the versatility of the proposed system in complex SAR missions.
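For context on the IBVS component in the SAR entry above, the classical image-based visual servoing law (a textbook relation, not quoted from the paper; the paper learns the servoing policy with DDPG rather than applying this analytic law directly) commands a camera velocity from the image-feature error:

```latex
% e = s - s^{*}: error between measured and desired image features;
% \widehat{L_e}^{+}: pseudo-inverse of an estimate of the interaction
% (image Jacobian) matrix; \lambda > 0: a control gain.
\mathbf{v}_c = -\lambda\, \widehat{\mathbf{L}_e}^{+}\, \mathbf{e}
```

A learned policy can be seen as replacing the hand-derived interaction matrix and gain with a mapping trained directly from image errors to velocity commands.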
Bavle, Hriday, et al., in Sensors (2019), 19(21), 1–20.

Deep- and reinforcement-learning techniques have increasingly required large sets of real data to achieve stable convergence and generalization, in the context of image recognition, object detection, or motion-control strategies. On this subject, the research community lacks robust approaches for overcoming the unavailability of extensive real-world data by means of realistic synthetic information and domain-adaptation techniques. In this work, synthetic-learning strategies have been used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver was learned with synthetic images and high-dimensional low-level continuous robot states, with deep- and reinforcement-learning techniques for object detection and motion control, respectively. A novel motion-control strategy for object following is introduced, in which the camera gimbal movement is coupled with the multirotor motion during the following maneuver. The results confirm that the present framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, providing proper results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).

Bavle, Hriday, in Aerospace (2018), 5(3).

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm replacing conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on the vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimation even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera using the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop using the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated in our open-source software framework for aerial robotics called Aerostack.

Bavle, Hriday. Scientific Conference (2018).

In this paper we propose a particle filter localization approach, based on stereo visual odometry (VO) and semantic information from indoor environments, for mini aerial robots. The prediction stage of the particle filter is performed using the 3D pose of the aerial robot estimated by the stereo VO algorithm. This predicted 3D pose is updated using inertial as well as semantic measurements. The algorithm processes semantic measurements in two phases: first, a pre-trained deep learning (DL)-based object detector is used for real-time object detection in the RGB spectrum; second, from the corresponding 3D point clouds of the detected objects, we segment their dominant horizontal plane and estimate their relative position, also augmenting a prior map with new detections. The augmented map is then used to obtain a drift-free pose estimate of the aerial robot. We validate our approach in several real flight experiments, where we compare it against ground truth and a state-of-the-art visual SLAM approach.
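The predict/update cycle of the particle-filter entry above can be sketched minimally. This is an illustrative numpy implementation under simplifying assumptions (Gaussian noise on the VO-predicted pose increment, an abstract likelihood built from the inertial and semantic-landmark measurements); it is not the authors' code.

```python
import numpy as np

def pf_step(particles, weights, vo_delta, meas_lik, motion_std=0.05):
    """One predict/update cycle. particles: (N, 3) poses [x, y, z] (orientation
    omitted for brevity); vo_delta: pose increment from stereo visual odometry;
    meas_lik(pose) -> likelihood of the current inertial/semantic measurements."""
    n = len(particles)
    # Predict: propagate every particle with the VO increment plus noise.
    particles = particles + vo_delta + np.random.normal(0.0, motion_std, particles.shape)
    # Update: reweight by the measurement likelihood and renormalize.
    weights = weights * np.array([meas_lik(p) for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

The mapped object planes enter through `meas_lik`: a particle from which the predicted positions of known objects match the current detections receives a high weight, which is what removes the VO drift.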
Bavle, Hriday, et al., in IEEE International Conference on Intelligent Robots and Systems (2018).

Deep learning techniques for motion control have recently improved qualitatively, since the successful application of Deep Q-Learning to the continuous action domain in Atari-like games. Based on these ideas, the Deep Deterministic Policy Gradients (DDPG) algorithm was able to provide impressive results in continuous state and action domains, which are closely linked to most robotics-related tasks. In this paper, a vision-based autonomous multirotor landing maneuver on top of a moving platform is presented. The behaviour has been completely learned in simulation, without prior human knowledge, by means of deep reinforcement learning techniques. Since the multirotor is controlled in attitude, no high-level state estimation is required. The complete behaviour has been trained with continuous action and state spaces and has provided proper results (landing at a maximum velocity of 2 m/s). Furthermore, it has been validated in a wide variety of conditions, for both simulated and real-flight scenarios, using a low-cost, lightweight and out-of-the-box consumer multirotor.

Bavle, Hriday, et al., in International Journal of Micro Air Vehicles (2018), 10(4), 352–361.

The lack of redundant attitude sensors represents a considerable yet common vulnerability in many low-cost unmanned aerial vehicles. In addition to the use of attitude sensors, exploiting the horizon as a visual reference for attitude control is part of human pilots' training. For this reason, and given the desirable properties of image sensors, a great deal of research has been conducted proposing the use of vision sensors for horizon detection in order to obtain redundant attitude estimation on board unmanned aerial vehicles. However, atmospheric and illumination conditions may hinder the operability of visible-light image sensors, or even make their use impractical, such as during the night. Thermal infrared image sensors have a much wider range of operating conditions, and their price has greatly decreased in recent years, making them an alternative to visible-spectrum sensors in certain operation scenarios. In this paper, two attitude estimation methods are proposed. The first method consists of a novel approach to estimate the line that best fits the horizon in a thermal image. The resulting line is then used to estimate the pitch and roll angles using an infinite horizon line model. The second method uses deep learning to predict attitude angles using raw pixel intensities from a thermal image. For this, a novel Convolutional Neural Network architecture has been trained using measurements from an inertial navigation system. Both methods are shown to be valid for redundant attitude estimation, providing RMS errors below 1.7° and running at up to 48 Hz, depending on the chosen method, the input image resolution, and the available computational capabilities.
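To make the infinite horizon line model of the thermal-attitude entry above concrete: for a pinhole camera with focal length f (in pixels), a detected horizon line with slope m and vertical offset y_h from the principal point gives, to a standard first approximation (assumed here, not quoted from the paper):

```latex
% Roll follows directly from the slope of the horizon line:
\phi = \arctan(m)
% Pitch from the vertical offset of the line relative to the image center:
\theta = \arctan\!\left(\frac{y_h}{f}\right)
```

This is why a single well-fitted line in the thermal image suffices to recover two of the three attitude angles, with yaw remaining unobservable from the horizon alone.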
Bavle, Hriday, et al. Scientific Conference (2018).

Navigation in unknown indoor environments with fast collision avoidance capabilities is an ongoing research topic. Traditional motion planning algorithms rely on precise maps of the environment, where re-adapting a generated path can be highly demanding in terms of computational cost. In this paper, we present a fast reactive navigation algorithm using deep reinforcement learning applied to multirotor aerial robots. Taking as input the 2D laser range measurements and the relative position of the aerial robot with respect to the desired goal, the proposed algorithm is successfully trained in a Gazebo-based simulation scenario by adopting an artificial potential field formulation. A thorough evaluation of the trained agent has been carried out in both simulated and real indoor scenarios, showing the appropriate reactive navigation behavior of the agent in the presence of static and dynamic obstacles.
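The artificial potential field formulation used for training in the entry above is conventionally written as follows (the standard attractive/repulsive form, assumed here rather than taken from the paper):

```latex
% Attractive potential toward the goal x_g; repulsive potential from obstacles
% at distance d(x), active only inside an influence radius d_0:
U_{\mathrm{att}}(x) = \tfrac{1}{2}\, k_{\mathrm{att}}\, \lVert x - x_g \rVert^2
U_{\mathrm{rep}}(x) =
  \begin{cases}
    \tfrac{1}{2}\, k_{\mathrm{rep}} \left( \dfrac{1}{d(x)} - \dfrac{1}{d_0} \right)^{2} & d(x) \le d_0 \\[4pt]
    0 & d(x) > d_0
  \end{cases}
% The commanded direction follows the negative gradient of the total field:
F(x) = -\nabla U_{\mathrm{att}}(x) - \nabla U_{\mathrm{rep}}(x)
```

In a learning setup, such a field is a natural source of dense reward shaping: the laser ranges provide d(x) and the goal-relative position provides the attractive term.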
Sanchez Lopez, Jose Luis, in Journal of Intelligent and Robotic Systems (2017), 88(2), 638–709.

To achieve fully autonomous operation for Unmanned Aerial Systems (UAS), it is necessary to integrate multiple and heterogeneous technical solutions (e.g., control-based methods, computer vision methods, automated planning, coordination algorithms, etc.). The combination of such methods in an operational system is a technical challenge that requires efficient architectural solutions. In a robotic engineering context, where productivity is important, it is also important to minimize the effort for the development of new systems. As a response to these needs, this paper presents Aerostack, an open-source software framework for the development of aerial robotic systems. This framework facilitates the creation of UAS by providing a set of reusable components specialized in functional tasks of aerial robotics (trajectory planning, self-localization, etc.) together with an integration method in a multi-layered cognitive architecture based on five layers: reactive, executive, deliberative, reflective, and social. Compared to other software frameworks for UAS, Aerostack can provide higher degrees of autonomy and is more versatile, being applicable to different types of hardware (aerial platforms and sensors) and different types of missions (e.g., multi-robot swarm systems). Aerostack has been validated over four years (since February 2013) through its successful use in many research projects, international competitions, and public exhibitions. As a representative example of system development, this paper also presents how Aerostack was used to develop a system for a (fictional) fully autonomous indoor search and rescue mission.

Bavle, Hriday, et al., in 2017 International Conference on Unmanned Aircraft Systems (ICUAS) (2017, June).

Bavle, Hriday, in 2017 International Conference on Unmanned Aircraft Systems (ICUAS) (2017, June).