Tracking the Absolute Position of a Mobile Robot Using Vision-Based Monte Carlo Localization

Jan Benda, Zbyněk Winkler
Charles University, Faculty of Mathematics and Physics, Department of Software Engineering,
Malostranské nám. 25, 118 00 Praha 1, Czech Republic

Abstract

In this paper, we present a new technique for localizing a mobile robot on a checkerboard-painted field. In conjunction with additional non-periodic attributes of the environment, the absolute position of the robot may be estimated using visual data only.

Particle filters are known to be highly efficient and robust in localization tasks. We describe an application of the Monte Carlo Localization (MCL) algorithm to data acquired with a web camera, performing real-time absolute localization on low-cost hardware. Since a large field of view is important to avoid a lack of traceable features in the camera image, a solution using a catadioptric omnidirectional visual sensor is presented.

The operation field for the test robotic platform is the playing table of the Eurobot Open 2005 robotic contest - a rectangle of 2.1 x 3.64 metres consisting of two fields with a brown-beige checkerboard pattern, separated by a blue ditch and surrounded by a white border. In each iteration of the algorithm, the camera image is first thresholded to identify these colours. The thresholded image is then passed to the MCL algorithm, which consists of two phases - prediction and correction. In the prediction phase, the information about relative motion is applied to a set of samples representing the probability distribution of the robot's position. In the correction phase, each sample is verified against a fixed number of random pixels from the thresholded image: a function representing the transformation from the image plane to the plane of the playing field is applied to the selected pixels to determine how well the sample's viewpoint matches the current observations. The transformation is pre-calculated and stored in a look-up table for fast execution. The entire algorithm iterates at the camera frame rate.
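The correction phase described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the cell size, ditch width, field model and all function names (field_colour, correct, resample) are assumptions, and the image-plane-to-field-plane look-up table is represented by pixels already mapped to robot-relative field coordinates.

```python
# Hypothetical sketch of the MCL correction phase on the checkerboard field.
# All names and dimensions except the 2.1 x 3.64 m table are assumptions.
import math
import random

CELL = 0.3  # assumed size of one checkerboard cell in metres

def field_colour(x, y):
    """Expected colour class at field coordinates (x, y) in metres."""
    if not (0.0 <= x <= 3.64 and 0.0 <= y <= 2.1):
        return "white"   # white border around the table
    if abs(x - 3.64 / 2) < 0.15:
        return "blue"    # central ditch (width assumed)
    return "brown" if (int(x / CELL) + int(y / CELL)) % 2 == 0 else "beige"

def correct(samples, observations, n_pixels=50):
    """Weight each sample (x, y, theta) by how many of a fixed number of
    random thresholded pixels match the expected field colours.
    Each observation is (dx, dy, colour): a pixel already mapped through
    the precomputed look-up table to robot-relative field coordinates."""
    pixels = random.sample(observations, min(n_pixels, len(observations)))
    weights = []
    for (px, py, theta) in samples:
        hits = 0
        for (dx, dy, colour) in pixels:
            # rotate the robot-relative point into world coordinates
            wx = px + dx * math.cos(theta) - dy * math.sin(theta)
            wy = py + dx * math.sin(theta) + dy * math.cos(theta)
            hits += (field_colour(wx, wy) == colour)
        weights.append(hits + 1e-9)  # avoid all-zero weights
    total = sum(weights)
    return [w / total for w in weights]

def resample(samples, weights):
    """Draw a new sample set proportionally to the weights."""
    return random.choices(samples, weights=weights, k=len(samples))
```

Samples whose viewpoint agrees with the thresholded pixels receive higher weights and therefore survive resampling, concentrating the sample set around the true pose.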

Since our method uses visual data both to determine relative motion and to verify the current location, unexpected motion such as wheel slippage or a collision cannot mislead it. Because motion prediction on the considered playing field is ambiguous, there is an upper bound on the robot's speed, defined by the period of the visual features and the time required to process one camera image.
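The speed bound can be illustrated with a back-of-the-envelope calculation. The numbers here (cell period, frame rate, the half-period criterion) are assumptions for illustration, not figures from the paper: if the robot moves more than half a checkerboard period between two processed frames, the periodic pattern makes the displacement ambiguous.

```python
# Illustrative upper bound on robot speed; all numbers are assumptions.
CELL_PERIOD = 0.3   # metres, assumed checkerboard cell size
FRAME_RATE = 15.0   # frames per second, assumed processing rate

# displacement per frame must stay below half a period to be unambiguous
v_max = (CELL_PERIOD / 2) * FRAME_RATE  # metres per second
print(v_max)
```

A faster camera or a larger pattern period would raise this bound proportionally.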

This new method combines fast colour thresholding, look-up coordinate transformation, vision-based motion prediction and Monte Carlo Localization to achieve robust and reliable absolute localization, which is crucial in robot competitions, where the robot has to orient itself quickly and reliably.

 
omni/abstract.txt · Last modified: 2005/09/23 11:59