Kabiri, Meisam. In Sensors (2022), 23(1), 188.

Efficient localisation plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), contributing to improved control, safety, and power economy. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities to enhance the localisation of UAVs and UGVs. In this paper, we review radio frequency (RF)-based approaches to localisation. We review the RF features that can be utilized for localisation and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localisation for both UAVs and UGVs is examined, the envisioned role of 5G NR in localisation enhancement is explored, and future research directions are discussed.
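As a concrete illustration of the range-based category surveyed above, the sketch below converts RSSI readings into distances with a log-distance path-loss model and then trilaterates the receiver position by linear least squares. This is a generic textbook pipeline, not the paper's own method; the transmit power, path-loss exponent, and anchor layout are illustrative assumptions.

```python
# Minimal range-based localisation sketch: RSSI -> range -> position.
# All numeric parameters (tx power, path-loss exponent, anchors) are
# illustrative assumptions, not values from the reviewed paper.
import numpy as np

def rssi_to_range(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Invert the log-distance model RSSI = P0 - 10 * n * log10(d)."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, ranges):
    """Solve ||x - a_i||^2 = r_i^2, linearised against the first anchor."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Usage: four simulated anchors and noise-free RSSI at a known position.
anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
true_pos = np.array([12.0, 7.0])
rssi = -40.0 - 25.0 * np.log10(np.linalg.norm(anchors - true_pos, axis=1))
print(trilaterate(anchors, rssi_to_range(rssi)))  # ~ [12. 7.]
```

With noisy RSSI the same least-squares step still applies; fingerprinting methods, the review's other category, instead match measured features against a pre-surveyed radio map.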
Cazzato, Dario. In Journal of Imaging (2020), 6(8), 78.

The spread of Unmanned Aerial Vehicles (UAVs) in the last decade has revolutionized many application fields. Most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module is exploited to build semantic knowledge, leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information concerning what is sensed. All in all, the detection of objects is undoubtedly the most important low-level task, and the sensors most employed to accomplish it are by far RGB cameras, due to their cost, dimensions, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions for UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by state-of-the-art works rather than by hardware, physical, and/or technological constraints.

…; …; Cimarelli, Claudio. In Applied Sciences (2020), 10(13), 4548.

Soft biometrics provide information about the individual, but without the distinctiveness and permanence needed to discriminate between any two individuals. Since gaze represents one of the most investigated human traits, works evaluating its feasibility as a possible additional soft biometric trait have recently appeared in the literature. Unfortunately, there is a lack of systematic studies on clinically approved stimuli providing evidence of the correlation between exploratory paths and individual identities in "natural" scenarios (without calibration, imposed constraints, or wearable tools). To overcome these drawbacks, this paper analyzes gaze patterns using a computer-vision-based pipeline in order to prove the correlation between visual exploration and user identity. This correlation is robustly computed in a free-exploration scenario, neither biased by wearable devices nor constrained by a prior personalized calibration. The stimuli provided were designed by clinical experts and thus allow a better analysis of human exploration behaviors. In addition, the paper introduces a novel public dataset that provides, for the first time, images framing the faces of the involved subjects instead of only their gaze tracks.

…; Cimarelli, Claudio. In 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, Valletta, 27-29 February 2020 (2020, February 27).

Two typical Unmanned Aerial Vehicle (UAV) countermeasures involve the detection and tracking of the UAV position as well as of the human pilot; these are of critical importance before taking any countermeasure, and they have already attracted strong attention from national security agencies in different countries. Recent advances in computer vision and artificial intelligence have produced many visual detection systems operating from a UAV, but they do not focus on the problem of detecting the pilot of another approaching unauthorized UAV. In this work, a first attempt at a fully autonomous pipeline that processes images from a flying UAV to detect the pilot of an unauthorized UAV entering a no-fly zone is introduced. A challenging video sequence was created by flying a UAV in an urban scenario, and it has been used for this preliminary evaluation. Experiments show very encouraging recognition results, and a complete dataset for evaluating artificial-intelligence-based solutions will be prepared.

Cimarelli, Claudio. In 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (2019, November 25).

Robot self-localization is essential for operating autonomously in open environments. When cameras are the main source of information for retrieving the pose, numerous challenges are posed by the presence of dynamic objects, due to occlusion and continuous changes in appearance. Recent research on global localization methods has focused on using a single (or multiple) Convolutional Neural Network (CNN) to estimate the 6 Degrees of Freedom (6-DoF) pose directly from a monocular camera image. In contrast with classical approaches using engineered feature detectors, CNNs are usually more robust to environmental changes in light and to occlusions in outdoor scenarios. This paper presents an attempt to empirically demonstrate the ability of CNNs to learn to ignore dynamic elements, such as pedestrians or cars. For this purpose, we pre-process a dataset for pose localization with an object segmentation network, masking potentially moving objects. We then compare the pose-regression CNN trained and/or tested on the set of masked images against one using the original images. Experimental results show that the performance of the two training approaches is similar, with a slight reduction in error when occluding objects are hidden from the views.
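The masking step described in the AVSS paper above can be sketched with an off-the-shelf instance-segmentation model. The example below assumes a recent torchvision with its pretrained COCO Mask R-CNN; the score threshold and the set of "dynamic" classes are illustrative choices, not the paper's exact configuration.

```python
# Hedged sketch: black out potentially moving objects before the images
# reach a pose-regression CNN. Assumes torchvision >= 0.13 (weights API).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# COCO ids: person, bicycle, car, motorcycle, bus, truck (assumed "dynamic").
DYNAMIC_CLASSES = {1, 2, 3, 4, 6, 8}

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def mask_dynamic_objects(image, score_thr=0.7):
    """Return `image` (PIL) as a tensor with dynamic objects zeroed out."""
    x = to_tensor(image)
    out = model([x])[0]
    for m, label, score in zip(out["masks"], out["labels"], out["scores"]):
        if label.item() in DYNAMIC_CLASSES and score.item() > score_thr:
            x[:, m[0] > 0.5] = 0.0  # binarise the soft mask, zero the pixels
    return x
```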
Cimarelli, Claudio. In International Conference on Computer Analysis of Images and Patterns (2019, August 22).

Precise and robust localization is of fundamental importance for robots required to carry out autonomous tasks. Above all, in the case of Unmanned Aerial Vehicles (UAVs), efficiency and reliability are critical aspects in developing localization solutions, due to limited computational capabilities and payload and power constraints. In this work, we leverage novel research in efficient deep neural architectures for the problem of 6 Degrees of Freedom (6-DoF) pose estimation from single RGB camera images. In particular, we introduce an efficient neural network that jointly regresses the position and orientation of the camera with respect to the navigation environment. Experimental results show that the proposed network achieves results similar to the most popular state-of-the-art methods while being smaller and having lower latency, which are fundamental aspects for real-time robotics applications.
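The joint position-and-orientation regression described in the CAIP paper above can be outlined in a PoseNet-style module: a compact backbone feeding two small heads, one for translation and one for a unit quaternion, trained with a weighted sum of the two errors. The backbone choice, head sizes, and weighting factor beta below are assumptions for illustration, not the architecture proposed in the paper.

```python
# Hedged PoseNet-style sketch of joint 6-DoF camera pose regression.
import torch
import torch.nn as nn
import torchvision

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # MobileNetV2 stands in for "an efficient backbone" (assumption).
        self.features = torchvision.models.mobilenet_v2(weights="DEFAULT").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_t = nn.Linear(1280, 3)  # translation (x, y, z)
        self.fc_q = nn.Linear(1280, 4)  # orientation (quaternion)

    def forward(self, img):
        f = self.pool(self.features(img)).flatten(1)
        q = self.fc_q(f)
        return self.fc_t(f), q / q.norm(dim=1, keepdim=True).clamp(min=1e-8)

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=120.0):
    """Weighted sum of translation and rotation errors (PoseNet-style)."""
    return ((t_pred - t_gt).norm(dim=1)
            + beta * (q_pred - q_gt).norm(dim=1)).mean()
```

A fixed beta is the simplest way to balance the two terms; later PoseNet variants learn this weighting during training.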
Cazzato, Dario. In Proceedings of the 2019 3rd International Conference on Artificial Intelligence and Virtual Reality (2019, July).

The ability of robots to imitate human movements has been an active research topic since the dawn of robotics. Obtaining a realistic imitation is essential in terms of perceived quality in human-robot interaction, but it is still a challenge due to the lack of an effective mapping between human movements and the degrees of freedom of robotic systems. While high-level programming interfaces, software, and simulation tools have simplified robot programming, there is still a strong gap between robot control and natural user interfaces. In this paper, a system to reproduce on a robot the head movements of a user in the field of view of a consumer camera is presented. The system recognizes the presence of a user and estimates their head pose in real time using a deep neural network, in order to extract head orientation angles and command the robot's head movements accordingly, obtaining a realistic imitation. At the same time, the system represents a natural user interface for controlling the Aldebaran NAO and Pepper humanoid robots with head movements, with applications in human-robot interaction.
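The final step of the head-imitation pipeline above, commanding the robot's head from the estimated angles, can be sketched against the NAOqi Python SDK that NAO and Pepper expose. The robot address, speed fraction, and clamping limits are illustrative assumptions; the deep-network head-pose estimator itself is abstracted away here.

```python
# Hedged sketch: map estimated head-pose angles onto NAO's head joints.
import math
from naoqi import ALProxy  # NAOqi Python SDK

motion = ALProxy("ALMotion", "192.168.1.10", 9559)  # hypothetical robot IP
motion.setStiffnesses("Head", 1.0)  # enable the head motors

def command_head(yaw_deg, pitch_deg, speed=0.2):
    """Clamp the user's head angles to NAO's joint limits and send them."""
    yaw = max(-2.08, min(2.08, math.radians(yaw_deg)))      # HeadYaw range
    pitch = max(-0.67, min(0.51, math.radians(pitch_deg)))  # HeadPitch range
    motion.setAngles(["HeadYaw", "HeadPitch"], [yaw, pitch], speed)
```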
Cazzato, Dario. In AIVR 2019: Proceedings of the 2019 3rd International Conference on Artificial Intelligence and Virtual Reality (2019).

…; Voos, Holger. Report (2017).

This is the qualification document for the new team, and first team of Luxembourg, called "Luxembourg United". This is also the team's first attempt to qualify for a RoboCup competition. First, a general description of the Luxembourg United team is presented, along with its members and the equipment owned by the team. Second, the mixed-team option is considered and potential eventualities are suggested. Third, the Luxembourg United team thankfully acknowledges its use of the B-Human code and lists its original contributions to this code as well as to RoboCup 2017. Fourth, the activities of the team that contribute to Luxembourg United are outlined. Fifth, the impact of the team's participation and research in RoboCup is described as it applies to the SPL community, to Luxembourg University and the SnT Research Center, and to the whole country of Luxembourg. We conclude this document with considerations pertaining to the path that brought us to RoboCup, and present future perspectives.