Reactive Qualitative Visual Navigation

Type | Technology transfer contract
Duration | -
Project leader | Francisco Bonin-Font
Collaborators | Gabriel Oliver Codina | Alberto Ortiz Rodriguez

DESCRIPTION

Robotic visual navigation in general indoor and outdoor environments with obstacle avoidance, with special emphasis on reactive control architectures. Ultrasonic and laser sensors have traditionally been used for obstacle avoidance and navigation in structured and unstructured scenarios. However, the former only provide sparse sets of readings, while the latter are still expensive. Lately, visual solutions have emerged as competitive alternatives because of the low cost of cameras, the richness of the sensor data they provide and their larger spatial and temporal resolution. Sonar-based images have arisen as a hybrid between sonar and vision: the environment is scanned by a grid of ultrasonic sensors, and the scattered readings are post-processed to produce something close to an image. Sonar images are especially suitable, for example, in underwater imaging, where, in turbid waters, optical images are very noisy and retrieving useful information from them is very difficult. Navigation techniques for mobile robots can be roughly divided into map-based and mapless systems. Map-based systems plan routes and monitor their execution, while mapless systems analyze the environment on-line to determine the route to follow. Vision-based approaches proposed throughout the last decades can also be classified as either map-based or mapless systems [16]. Some of them build local occupancy maps that register the presence of obstacles in the vicinity of the robot, providing a symbolic view of the surrounding world. These systems can be considered hybrids, since they combine aspects typical of map-based systems with others from mapless systems. The maps are updated on-line and used to navigate safely; their construction entails computing the range and angle of nearby obstacles in a particularly accurate manner.
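As a minimal illustration of such a robot-centred local occupancy map, the sketch below marks (range, angle) obstacle readings in a small grid; the function name, cell size and grid extent are illustrative choices, not the project's actual data structure.

```python
import numpy as np

def update_local_map(readings, size=21, cell=0.1):
    """Build a robot-centred local occupancy grid from (range, angle)
    obstacle readings.

    0 = free, 1 = occupied; the robot sits at the centre cell; x is
    forward, y is left, and angles are in radians from the heading.
    """
    grid = np.zeros((size, size), dtype=int)
    c = size // 2                          # robot at the grid centre
    for rng, ang in readings:
        x = rng * np.cos(ang)              # forward offset in metres
        y = rng * np.sin(ang)              # lateral offset (left positive)
        i = c + int(round(x / cell))       # row index grows forward
        j = c + int(round(y / cell))
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = 1                 # mark the cell as occupied
    return grid
```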
Visual sonar approaches constitute a different way to compute and represent the range and angle of obstacles in the vicinity of the robot, retrieving and representing environmental data analogously to sonar but using a visual sensor. Many of the mapless or local-mapping navigation solutions proposed so far are based on edge computation or on texture segmentation; they are thus vulnerable to shadows, inter-reflections and textured floors. Other solutions based on optical flow determine the direction of motion of the robot with respect to the environment, or vice versa, typically by estimating the displacement of image features across successive images. This process is highly time-consuming and requires differentiating, in some way, the optical flow caused by ground points from that caused by obstacle or wall points. Other vision-based solutions, which compute homographies to match points in successive images, fail in scenarios that generate scenes with multiple planes. Solutions based on feature tracking mostly need to compute and account for the robot egomotion. Some road-line trackers based on the Inverse Perspective Transformation (IPT) first need to find lines in the image that converge to the vanishing point, while other IPT-based solutions project the whole image onto the ground, increasing the computational cost. This project presents a new navigation strategy comprising obstacle detection and avoidance.
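To give an idea of the geometry behind IPT, the sketch below back-projects a single pixel onto the ground plane under the usual flat-floor assumption (known camera height and pitch, pinhole model). It is a generic illustration, not the project's implementation; the function name and frame conventions are assumptions.

```python
import numpy as np

def back_project_to_ground(u, v, K, cam_height, pitch):
    """Back-project pixel (u, v) onto the ground plane z = 0.

    Assumes a pinhole camera with intrinsics K, mounted at height
    `cam_height` (metres) and pitched down by `pitch` radians.
    World frame: x forward, y left, z up.  Returns (x, y) ground
    coordinates, or None if the pixel lies on or above the horizon.
    """
    # Ray direction in camera coordinates (x right, y down, z forward)
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera axes expressed in the world frame
    right = np.array([0.0, -1.0, 0.0])
    forward = np.array([np.cos(pitch), 0.0, -np.sin(pitch)])
    down = np.cross(forward, right)        # completes a right-handed frame
    d_w = d_cam[0] * right + d_cam[1] * down + d_cam[2] * forward
    if d_w[2] >= -1e-9:                    # ray parallel to or above ground
        return None
    t = cam_height / -d_w[2]               # scale factor to reach z = 0
    p = np.array([0.0, 0.0, cam_height]) + t * d_w
    return p[0], p[1]
```

A feature that sits on the floor keeps a consistent ground position under this projection across frames; a feature on an obstacle does not, which is the intuition behind IPT-based ground/obstacle classification.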
Unlike previous approaches, the one presented in this work avoids back-projecting the whole image, does not need to compute motion or optical flow, shows a certain robustness to textured floors, shadows and inter-reflections, handles scenes with multiple planes, and combines a quantitative process with a set of qualitative rules, converging in a robust technique to safely explore unknown environments. The method is inspired by visual sonar-based reactive navigation algorithms and implements the Vector Field Histogram method [17], here adapted to a vision sensor. The algorithm runs in five steps: 1) the main image features are detected, tracked across consecutive frames, and classified as either obstacle or ground using a new algorithm based on IPT; 2) the edge map of the processed frames is computed, and edges comprising obstacle points are discriminated from the rest, emphasizing the obstacle boundaries; 3) the range and angle of obstacles located inside a Region of Interest (ROI), centered on the robot and with a fixed radius, are estimated by computing the orientation and distance of those obstacle points in contact with the floor; 4) a qualitative occupancy map is built from the data computed in the previous step; and 5) finally, the algorithm computes a vector which steers the robot towards areas free of obstacles.
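The last steps can be sketched as a simplified Vector Field Histogram iteration: obstacle (range, angle) readings inside the ROI vote into angular sectors, weighted so that closer obstacles count more, and the robot heads for the free sector closest to its desired direction. The sector count, weighting and tie-breaking below are illustrative choices, not the project's exact parameters.

```python
import numpy as np

def steering_direction(obstacles, roi_radius=2.0, n_sectors=36,
                       target_angle=0.0):
    """Pick a steering angle from (range, angle) obstacle readings.

    `obstacles` is a list of (range_m, angle_rad) pairs, angle 0 being
    straight ahead.  Returns the centre angle of the obstacle-free
    sector closest to `target_angle`, or None if every sector is
    blocked (robot should stop or turn in place).
    """
    edges = np.linspace(-np.pi, np.pi, n_sectors + 1)
    hist = np.zeros(n_sectors)
    for rng, ang in obstacles:
        if rng > roi_radius:
            continue                       # outside the region of interest
        sector = np.searchsorted(edges, ang, side='right') - 1
        sector = min(max(sector, 0), n_sectors - 1)
        hist[sector] += (roi_radius - rng) / roi_radius  # closer = heavier
    centers = 0.5 * (edges[:-1] + edges[1:])
    free = hist < 1e-12                    # sectors with no obstacle weight
    if not free.any():
        return None
    candidates = centers[free]
    return candidates[np.argmin(np.abs(candidates - target_angle))]
```

For example, a single close obstacle slightly to the left of the heading makes the algorithm choose the nearest free sector just to the other side.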

PUBLICATIONS

F. Bonin-Font, A. Ortiz, G. Oliver. A Novel Image Feature Classifier based on Inverse Perspective Transformation. 2008.

F. Bonin-Font, A. Ortiz, G. Oliver. A Novel Inverse Perspective Transformation-based Reactive Navigation Strategy. In European Conference on Mobile Robots (ECMR), Mlini/Dubrovnik (Croatia), 2009.

F. Bonin-Font, A. Ortiz, G. Oliver. Experimental Assessment of Different Feature Tracking Strategies for an IPT-based Navigation Task. In IFAC Symposium on Intelligent Autonomous Vehicles (IAV), Lecce (Italy), 2010.

F. Bonin-Font, A. Ortiz, G. Oliver. A novel Vision-Based Reactive Navigation Strategy Based on Inverse Perspective Transformation. In IEEE/IFAC International Conference on Informatics in Control, Automation and Robotics (ICINCO), Milan (Italy), 2009.

F. Bonin-Font, A. Ortiz. Building a Qualitative Local Occupancy Grid in a new Vision-based Reactive Navigation Strategy for Mobile Robots. In IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Palma de Mallorca (Spain), 2009.

F. Bonin-Font, A. Ortiz, G. Oliver. A Visual Navigation Strategy Based on Inverse Perspective Transformation. In Robot Vision, InTech, Ales Ude, pp. 61 – 84, 2010.

F. Bonin-Font, A. Ortiz, G. Oliver. Visual Navigation for Mobile Robots: a Survey. In Journal of Intelligent & Robotic Systems, vol. 53, no. 3, pp. 263-296, 2008.

