Abstract
Localization of a pedestrian in an indoor environment remains an open problem.
Current locomotion algorithms in structured (indoor) 3D environments require accurate localization. The
numerous and diverse sensors typically embedded on legged robots (IMU, encoders, vision and/or LIDARs) should make this possible
if properly fused. Yet this is a difficult task due to the heterogeneity of these sensors and the real-time requirements of the
control. While previous works used staggered approaches (high-frequency odometry, sparsely corrected by vision and
LIDAR localization), recent progress in optimal estimation, in particular in visual-inertial localization, is paving the way to
a holistic fusion. This paper is a contribution in this direction. We propose to quantify how accurately a visual-inertial navigation system
can localize a humanoid robot in a 3D indoor environment tagged with fiducial markers.
We introduce a theoretical contribution strengthening the formulation of Forster's IMU pre-integration, a practical contribution avoiding the
ambiguity that can arise in the pose estimation of fiducial markers, and an experimental contribution on a humanoid dataset with ground
truth. Our system localizes the robot with less than 2 cm of error once the environment is properly mapped. Thanks to the genericity of the
proposed pre-integration algebra, it would naturally extend to additional measurements corresponding to leg odometry (kinematic
factors).
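To make the pre-integration idea concrete, the sketch below accumulates the IMU delta quantities between two keyframes in the spirit of Forster's formulation. It is a minimal illustration, not the paper's implementation: the bias and gravity terms, the noise covariance propagation, and the Jacobians used in the actual factors are omitted, and all names are my own.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: rotation matrix from a rotation vector w (rad)."""
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3)
    k = w / th
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def preintegrate(acc, gyr, dt):
    """Accumulate the IMU delta (dR, dv, dp) between two keyframes.

    The deltas depend only on the IMU samples, not on the initial pose or
    velocity, which is what allows them to be reused when the keyframe
    states are re-estimated (gravity and biases, omitted here, are handled
    when the factor is evaluated).
    """
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a, w in zip(acc, gyr):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp

# Example: constant 1 m/s^2 forward acceleration, no rotation, for 1 s.
dt, n = 0.01, 100
dR, dv, dp = preintegrate([np.array([1.0, 0.0, 0.0])] * n,
                          [np.zeros(3)] * n, dt)
# dv -> [1, 0, 0] (m/s), dp -> [0.5, 0, 0] (m)
```

The key design point is that `dp`, `dv` and `dR` are expressed in the frame of the first keyframe, so re-linearizing the trajectory during optimization does not require re-integrating the raw IMU stream.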
My Contributions to this work
This work was led by Médéric Fourmy as a PhD student at LAAS-CNRS. It is the continuation of my own thesis, and my contributions to
it were made during my post-doctoral position. I helped him understand the pre-integration theory that I developed during my thesis, as well
as the developments made in the WOLF framework. I also had the pleasure of developing the fiducial-marker-based SLAM that he used in this paper.
Publication
IEEE-RAS International Conference on Humanoid Robots