MINES ParisTech CAS - Centre automatique et systèmes

Rotational and translational bias estimation based on depth and image measurements

Authors: N. Zarrouati, P. Rouchon, K. Beauchard, IEEE Conference on Decision and Control (2012), pp. 6627–6634, 10–13 Dec. 2012, Hawaii, USA
Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static environment by an embedded camera and depth sensor. We propose here a Lyapunov-based observer that exploits the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting observer is governed by a nonlinear integro/partial differential system whose inputs are the linear/angular velocities and the brightness/depth fields. Convergence is analyzed under C³ regularity assumptions on the object motion and its environment; technically, the analysis relies on the Ascoli–Arzelà theorem and on the pre-compactness of the observer trajectories. It ensures asymptotic convergence of the estimated brightness and depth fields. Convergence of the estimated biases is characterized by constraints depending only on the environment; we conjecture that these constraints are automatically satisfied whenever the environment admits no axis of rotational symmetry. Such asymptotic observers can be adapted to any realistic camera model. Preliminary simulations with synthetic image and depth data (corrupted by noise of around 10%) indicate that such Lyapunov-based observers converge under much weaker regularity assumptions.
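The paper's observer is infinite-dimensional, but the underlying Lyapunov mechanism can be illustrated on a minimal finite-dimensional analogue: estimating a constant bias b on a measured velocity from position measurements. The 1-D setting, the gains k and l, and the velocity profile below are illustrative assumptions for this sketch, not the paper's actual SO(3)-invariant observer.

```python
import math

def bias_observer(T=60.0, dt=1e-3, k=2.0, l=1.0, b=0.7):
    """Toy 1-D analogue of a Lyapunov-based bias observer (illustrative only).

    True dynamics: x' = v(t); the sensor reports v_m = v + b (constant bias).
    Observer:      xh' = v_m - bh + k*(x - xh),   bh' = -l*(x - xh).
    With e = x - xh and V = e**2/2 + (bh - b)**2/(2*l), one checks
    V' = -k*e**2 <= 0, so e -> 0 and the bias estimate bh -> b.
    """
    x, xh, bh = 0.0, 0.5, 0.0           # true state and observer estimates
    for i in range(int(T / dt)):
        t = i * dt
        v = math.sin(t)                 # true (unknown) velocity
        vm = v + b                      # biased velocity measurement
        e = x - xh                      # measured output (position) error
        x += dt * v                     # explicit Euler integration
        xh += dt * (vm - bh + k * e)    # copy of dynamics + output injection
        bh += dt * (-l * e)             # bias adaptation law
    return xh - x, bh - b               # residual state and bias errors
```

The error dynamics are linear, (e, bh - b)' = [[-k, 1], [-l, 0]] (e, bh - b), with characteristic polynomial s² + k s + l; with the gains above both eigenvalues sit at -1, so both errors decay exponentially regardless of the velocity profile.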
Download PDF
BibTeX:
@InProceedings{,
author = {N. Zarrouati and P. Rouchon and K. Beauchard},
editor = {},
title = {Rotational and translational bias estimation based on depth and image measurements},
booktitle = {IEEE Conference on Decision and Control (2012)},
volume = {},
publisher = {},
address = {Hawaii, USA},
pages = {6627--6634},
year = {2012},
abstract = {Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static environment by an embedded camera and depth sensor. We propose here a Lyapunov-based observer that exploits the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting observer is governed by a nonlinear integro/partial differential system whose inputs are the linear/angular velocities and the brightness/depth fields. Convergence is analyzed under C³ regularity assumptions on the object motion and its environment; technically, the analysis relies on the Ascoli–Arzelà theorem and on the pre-compactness of the observer trajectories. It ensures asymptotic convergence of the estimated brightness and depth fields. Convergence of the estimated biases is characterized by constraints depending only on the environment; we conjecture that these constraints are automatically satisfied whenever the environment admits no axis of rotational symmetry. Such asymptotic observers can be adapted to any realistic camera model. Preliminary simulations with synthetic image and depth data (corrupted by noise of around 10%) indicate that such Lyapunov-based observers converge under much weaker regularity assumptions.},
keywords = {Adaptation models, Brightness, Cameras, Convergence, Equations, Observers, Vectors}}