Keywords :
Robotics, Computer Vision, Sensor Fusion, Calibration, Localization
Abstract :
[en] Calibration is an indispensable prerequisite for a wide range of multi-sensor fusion applications. This thesis focuses on vision-based multi-sensor systems. Cameras have the favorable characteristics of low cost, small size, and low power consumption. Moreover, cameras can perceive rich semantic information in the environment. These appealing advantages have made cameras popular in robotics, AR/VR, and planetary exploration, and visual localization has consequently become ubiquitous. To evaluate the accuracy of visual localization, an additional sensor with higher reference precision must be introduced. Meanwhile, to improve the accuracy and robustness of visual localization, cameras are usually tightly coupled with other complementary sensors. These demands require solving the problem of multi-sensor calibration, which is the core topic of this thesis.

The first part of this thesis considers the calibration problem in vision-based relative localization applications. The focus is on how to use a high-precision global pose sensor to evaluate visual localization accuracy. To achieve this goal, the spatial-temporal parameters between the camera and the global pose sensor must be calibrated. Two novel calibration algorithms are proposed: a target-based method and a target-less method. The principles underlying these two methods can be applied to any calibration task.

The second part of this thesis investigates the calibration problem in multi-sensor localization involving GPS. More specifically, the observability issue is addressed for the GPS-VIO system. The existing analysis based on linear observability theory concludes that the rotational extrinsic parameters between GPS and VIO are unobservable. However, experiments indicate that this is not true. To resolve the discrepancy between theory and experiment, a novel nonlinear observability analysis is proposed, highlighting the theoretical contribution of this research.
The third part of this thesis revisits online extrinsic calibration for the VIO system. For the commonly seen and fundamental case of pure translational straight-line motion, an issue with the existing observability conclusion is identified. Contrary to that conclusion, a novel proof shows that this motion can render the rotational extrinsic parameter between IMU and camera unobservable (in at least one degree of freedom). By correcting the existing conclusion, this theoretical finding disseminates a more precise principle to the research community and provides an explainable calibration guideline for practitioners.

Lastly, this thesis advances calibration efficiency for the IMU-camera system. Most existing offline target-based calibration algorithms adopt a continuous-time state representation based on B-splines. Although these methods can accurately calibrate spatial-temporal parameters, they suffer from high computational costs. To address this limitation, an extremely efficient calibration algorithm that unleashes the power of the discrete-time state representation is designed, achieving up to a 1000x speedup over the most popular calibration toolbox (Kalibr). Overall, this thesis deepens calibration research for vision-based multi-sensor systems. These investigations provide a more solid foundation for the state estimation of multi-sensor systems, from the perspectives of both system development and theoretical support.
Institution :
Unilu - University of Luxembourg [Interdisciplinary Centre for Security, Reliability and Trust (SNT)], Luxembourg, Luxembourg