A novel approach for the visual control of wheeled mobile robots has been proposed, extending existing works that use the trifocal tensor as the source of measurements. In this approach, the singularities typically encountered in this kind of method are removed by formulating the control problem in terms of the trifocal tensor and by using a virtual target, vertically translated from the real target. A single controller is designed that regulates the robot pose towards the desired configuration without local minima. Additionally, the proposed approach is valid for perspective cameras as well as for catadioptric systems obeying a central camera model.
H. M. Becerra, J. B. Hayet and C. Sagüés, "A Single Visual-Servo Controller of Mobile Robots with Super-Twisting Control", Robotics and Autonomous Systems, accepted, March 2014.
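As a rough illustration of the super-twisting technique named in the title, the sketch below integrates the classical second-order sliding-mode law on a scalar sliding variable. The sliding variable s, the gains and the disturbance are illustrative placeholders; in the paper the sliding surface is built from trifocal-tensor measurements.

```python
import numpy as np

def super_twisting_step(s, v, k1, k2, dt):
    """One Euler step of the super-twisting algorithm:
    u = -k1*sqrt(|s|)*sign(s) + v,   dv/dt = -k2*sign(s)."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v = v - k2 * np.sign(s) * dt
    return u, v

# Toy closed loop ds/dt = u + d(t) with a bounded, matched disturbance.
# Gains are illustrative, not the ones tuned in the paper.
s, v, dt = 1.0, 0.0, 1e-3
for k in range(5000):
    u, v = super_twisting_step(s, v, k1=2.0, k2=1.5, dt=dt)
    d = 0.5 * np.sin(k * dt)          # disturbance with bounded derivative
    s = s + (u + d) * dt
print(f"sliding variable after 5 s: {s:.4f}")  # driven to a neighborhood of zero
```

Unlike a first-order sliding-mode law, the discontinuity acts inside an integrator, which attenuates chattering while preserving finite-time convergence of the sliding variable.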
We propose a new visual servoing scheme based on pose estimation to drive mobile robots to a desired location specified by a target image. Our scheme recovers the camera location (position and orientation) using an Extended Kalman Filter (EKF) with the 1D trifocal tensor (TT) or the epipolar geometry as the measurement model. A feedback control law on the estimated state is designed to solve a tracking problem for the lateral and longitudinal robot coordinates. The desired trajectories to be tracked ensure total correction of both position and orientation with a single-step control law, even though the orientation remains a free degree of freedom (DOF) in the control system. The interest of this approach is that the new visual servoing scheme makes the real-world path followed by the robot known, without the computational load introduced by position-based approaches. The effectiveness of our approach is demonstrated through simulations and real-world experiments using a hypercatadioptric imaging system.
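A minimal sketch of such an estimation loop follows, assuming unicycle kinematics for the prediction step. The measurement functions h and H are generic callables standing in for the 1D TT or epipolar measurement models used in the paper; the bearing measurement in the usage lines is purely illustrative.

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt):
    """EKF prediction for a unicycle, state x = (x, y, theta), input u = (v, w)."""
    v, w = u
    F = np.array([[1.0, 0.0, -dt * v * np.sin(x[2])],
                  [0.0, 1.0,  dt * v * np.cos(x[2])],
                  [0.0, 0.0,  1.0]])                   # Jacobian at the prior state
    x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
    return x, F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """EKF correction; in the paper, h maps the pose to 1D TT elements
    (or epipoles). Here h is any callable with Jacobian H."""
    Hx = H(x)
    K = P @ Hx.T @ np.linalg.inv(Hx @ P @ Hx.T + R)
    return x + K @ (z - h(x)), (np.eye(3) - K @ Hx) @ P

# Illustrative measurement: bearing to the origin (NOT the paper's model).
h = lambda x: np.array([np.arctan2(-x[1], -x[0]) - x[2]])
H = lambda x: np.array([[-x[1], x[0], -(x[0]**2 + x[1]**2)]]) / (x[0]**2 + x[1]**2)

x, P = np.array([2.0, 1.0, 0.1]), np.eye(3) * 0.1
x, P = ekf_predict(x, P, u=(0.3, 0.05), Q=np.eye(3) * 1e-4, dt=0.1)
x, P = ekf_update(x, P, z=h(x) + 0.01, h=h, H=H, R=np.array([[1e-3]]))
```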
1. Simulation of the pose-estimation-based control.
2. Real-world experiment of the pose-estimation-based control.
We have introduced an image-based approach to the visual control of wheeled mobile robots that, for the first time, uses the elements of the 1D trifocal tensor directly in the control law. The visual control follows the usual teach-by-showing strategy, requires no a priori knowledge of the scene, and does not need any auxiliary image. The main contribution of this work is that the proposed two-step control law ensures total correction of both position and orientation without switching to any visual constraint other than the 1D trifocal tensor. Our approach can be applied with any visual sensor approximately obeying a central projection model, presents good robustness to image noise, and avoids the short-baseline problem by exploiting the information of three views. We exploit the properties of omnidirectional images, which preserve bearing information, by using the 1D trifocal tensor. The control law exploits the sliding-mode control technique in a square system, ensuring stability and robustness of the closed loop.
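The structure of such a law can be sketched as a first-order sliding-mode controller for a square two-input system. The sliding surfaces s and the decoupling matrix M below are placeholders for the expressions in the 1D TT elements derived in the paper; the gains are illustrative.

```python
import numpy as np

def sliding_mode_law(s, M, gains=(0.3, 0.2)):
    """First-order sliding-mode law for a square system ds/dt = M(x) u.
    Choosing u = -M^{-1} K sign(s) gives s_i * ds_i/dt = -K_i |s_i|,
    so each surface reaches zero in finite time (M assumed invertible).
    Returns the unicycle inputs u = (v, w)."""
    K = np.asarray(gains)
    return -np.linalg.solve(M, K * np.sign(s))

# Toy example with a constant, invertible decoupling matrix.
s = np.array([0.4, -0.2])
M = np.array([[1.0, 0.2],
              [0.1, 1.0]])
dt = 1e-3
for _ in range(3000):
    u = sliding_mode_law(s, M)
    s = s + M @ u * dt        # surface dynamics ds/dt = M u
print(s)  # both components driven to a small neighborhood of zero
```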
1. Simulation of the trifocal tensor-based control.
2. Real-world experiment of the trifocal tensor-based control.
We have proposed a novel control law based on sliding-mode theory to perform visual servoing for mobile robots. The control law exploits the epipolar geometry extended to three views, on the basis of image-based visual servoing. The control strategy is performed in two steps: the first step achieves total correction of both orientation and lateral error, while the second step performs depth correction. Our main contributions on this topic are: 1) the control law performs orientation, lateral-error and depth correction with no auxiliary images and without switching to any visual constraint other than the epipolar geometry; 2) the control law deals with the singularities induced by the epipolar geometry in the first step while always keeping the inputs bounded; and 3) for the first time, a robust control law is proposed to face uncertainty in the parameters of the vision system, so that our approach does not need a precise camera calibration. Moreover, the target location is reached even in the presence of image noise.
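On the geometric side, a minimal sketch of the quantities involved: the epipoles can be recovered as the null spaces of the fundamental matrices relating the views, and sign-based sliding-mode inputs are bounded by their gains by construction, which is one way of keeping the commands bounded near degenerate configurations. Function names and gains are illustrative; the actual two-step sliding surfaces are defined in the paper.

```python
import numpy as np

def epipoles(F):
    """Epipoles of a fundamental matrix F (F e1 = 0, e2^T F = 0),
    recovered via SVD; dehomogenization assumes finite epipoles."""
    U, _, Vt = np.linalg.svd(F)
    e1 = Vt[-1] / Vt[-1, 2]      # right null vector: epipole in image 1
    e2 = U[:, -1] / U[2, -1]     # left null vector: epipole in image 2
    return e1, e2

def bounded_sm_inputs(s, gains=(0.2, 0.1)):
    """Sign-based inputs: |u_i| <= gains_i regardless of the state, so the
    commands stay bounded even close to a singular epipolar configuration."""
    return -np.asarray(gains) * np.sign(np.asarray(s))

# Example: for a pure translation t (identity rotation, same camera),
# F = [t]_x, and both epipoles coincide with t up to scale.
t = np.array([1.0, 0.0, 0.2])
F = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
e1, e2 = epipoles(F)
```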
1. Simulation of the robust epipolar-based control.
2. Real-world experiment of the robust epipolar-based control.