
my robots


SLAM for an aerial robot

This is the product of my robotics MSc. I worked on a sensor fusion system that simultaneously improves the localization of the robot and builds a map of the environment, a problem known as SLAM (Simultaneous Localization and Mapping). Inertial sensors are very precise over short periods of time, but as time passes the integration of their errors makes positioning impossible. The camera, on the other hand, offers unbiased positioning with no error accumulation, but its short-term precision is low.
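To see why pure inertial navigation drifts, here is a minimal sketch of double-integrating a biased accelerometer (the 0.05 m/s² bias and 100 Hz rate are illustrative numbers, not the real sensor's specification):

```python
import numpy as np

# Toy dead-reckoning example: a constant accelerometer bias of 0.05 m/s^2
# (hypothetical value) is double-integrated over 60 s at 100 Hz.
dt, steps = 0.01, 6000
bias = 0.05                      # m/s^2, assumed bias
vel = pos = 0.0
for _ in range(steps):
    vel += bias * dt             # error grows linearly in velocity
    pos += vel * dt              # ...and quadratically in position
print(f"position drift after 60 s: {pos:.1f} m")   # roughly 90 m
```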

In the SLAM system, an Extended Kalman Filter merges the information from the accelerometers with that of a camera that detects features on the ground. As the airship moves, the accelerometer readings are integrated to estimate its position. When the camera observes the environment features, the positions of both the features and the airship itself can be refined by comparing where each feature appears at different moments.
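Here is a deliberately simplified, one-dimensional and linear toy of that predict/update cycle, just to show the mechanics (the real filter has a full 3D state and linearizes the camera model, hence the "Extended" in EKF; all noise values below are placeholders):

```python
import numpy as np

# Minimal 1-D SLAM-style Kalman filter sketch (illustrative only):
# state x = [robot position, robot velocity, landmark position].
dt = 0.1
F = np.array([[1, dt, 0],        # constant-velocity motion model
              [0, 1,  0],
              [0, 0,  1]])       # landmarks do not move
B = np.array([[0.5 * dt**2], [dt], [0]])   # accelerometer enters as control input
Q = np.diag([1e-4, 1e-3, 0.0])   # assumed process noise
H = np.array([[-1.0, 0.0, 1.0]]) # camera measures landmark position relative to robot
R = np.array([[1e-2]])           # assumed camera measurement noise

x = np.zeros((3, 1)); x[2] = 5.0          # landmark initialized 5 m ahead
P = np.diag([0.0, 0.0, 1.0])              # landmark position initially uncertain

def predict(x, P, accel):
    x = F @ x + B * accel                 # integrate the accelerometer
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = predict(x, P, accel=0.2)           # one accelerometer reading
x, P = update(x, P, z=np.array([[4.9]]))  # one camera observation of the landmark
```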


Simulation

This simulation environment was developed to showcase the results of the SLAM system and to offer a visual, intuitive reference for its performance. It was fun to develop and, in a way, it works like a video game. I created a simple environment mimicking the real-life test location of the airship (AS800), plus a model of the airship itself. A precise simulator of the airship runs offline to generate both the true airship positions and the sensor readings fed to the SLAM system (with errors modeled after the real sensors). The SLAM system runs on that data and produces another set of airship positions, reflecting how it estimated the trajectory. The visual side of the simulator is then used to generate a video of the test. In the picture you can see the environment and, at the top left, both the real position of the airship (solid) and the one estimated by the system (red shadow).
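The interesting part of feeding the SLAM system is corrupting the ideal sensor readings the way the real hardware would. A minimal sketch of that idea is below; the bias and noise figures are placeholders, not the AS800's actual sensor specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accelerometer(true_accel, bias=0.02, noise_std=0.05):
    """Return a biased, noisy reading for each ideal acceleration sample."""
    return true_accel + bias + rng.normal(0.0, noise_std, size=true_accel.shape)

true_accel = np.zeros(1000)                  # e.g. airship hovering: zero acceleration
measured = simulate_accelerometer(true_accel)  # what the SLAM system actually receives
```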


Monocular Visual Servoing and the Homography matrix

I worked on this project during my MSc, as part of a very interesting Computer Vision course. The project was twofold: first, I implemented a Visual Servoing algorithm that controls the robot, driving it to a new position by comparing the image obtained by the camera at each frame with a target image taken from the target position before the experiment. The second step was to apply a transformation to an initial image to simulate what it would look like if the picture had been taken from a different position. For that I used epipolar geometry to compute a Homography matrix and then transform the image.
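A sketch of that second step, estimating a homography from matched features and warping the current view, could look like this with today's OpenCV (the file names and the ORB detector are placeholders, any detector/matcher pair would do):

```python
import cv2
import numpy as np

# Estimate a homography between two views and re-render one as the other.
img_src = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)
img_dst = cv2.imread("target_view.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img_src, None)
kp2, des2 = orb.detectAndCompute(img_dst, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

pts_src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(img_src, H, (img_dst.shape[1], img_dst.shape[0]))
```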

The result is a system for the Nomad 200 (in the picture) that performs visual servoing without the need for a picture taken from the target position. It operates in a natural environment thanks to a robust feature-tracking algorithm. So instead of giving the system a picture that marks the target, you can give the robot simple commands, like move 1 meter forward and turn 27 degrees clockwise.
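The reason that works is that, for a planar scene, the homography induced by a known camera motion can be built directly, so a motion command can be turned into a synthetic target image. A small sketch of that construction is below; the intrinsics K, the plane normal n and the distance d are illustrative values, not calibration data from the Nomad:

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])        # assumed camera intrinsics

theta = np.deg2rad(27.0)                    # "turn 27 degrees"
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([[0.0], [0.0], [1.0]])         # "move 1 m forward" along the optical axis
n = np.array([[0.0], [0.0], [1.0]])         # assumed plane normal
d = 3.0                                     # assumed distance to the plane (m)

# Planar homography induced by the motion (R, t) relative to the plane (n, d).
H = K @ (R - t @ n.T / d) @ np.linalg.inv(K)
H /= H[2, 2]                                # normalize
```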


Controlling the Khepera

Playing with the Khepera was a lot of fun. In this project I developed another Visual Servoing system, this time in a controlled environment. The computer vision side of the system detected any obstacles in the environment and determined the robot's position and orientation from a specific image pattern. In an empty environment, the robot used Sørdalen's algorithm for controlling nonholonomic robots, with the results shown on the left side of the picture. In an environment with obstacles, a pathfinding algorithm was combined with the computer vision algorithms to generate a path that takes the Khepera to its target position without hitting any obstacles. The robot then followed that path, still using Sørdalen's approach.
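To give a flavour of what driving a nonholonomic robot to a pose looks like, here is a standard polar-coordinate go-to-goal law for a unicycle-type robot. It is only an illustrative stand-in, not Sørdalen's algorithm, and the gains are arbitrary:

```python
import numpy as np

K_RHO, K_ALPHA = 0.5, 1.5      # assumed controller gains

def go_to_goal(pose, goal):
    """pose = (x, y, theta); goal = (x, y). Returns (v, omega)."""
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    rho = np.hypot(dx, dy)                            # distance to goal
    alpha = np.arctan2(dy, dx) - pose[2]              # heading error
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
    return K_RHO * rho, K_ALPHA * alpha               # forward and angular velocity
```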


Khepera control software

A Linux application was developed to control the Khepera robot (bottom right of the picture) using the system described above. The image from the aerial camera is displayed in real time by the application, and through its interface the user can calibrate the camera or send the robot to any position by clicking on the image with the mouse.
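The click-to-move idea boils down to mapping a pixel to a point on the floor using the calibration. A minimal sketch, assuming the calibration step produces an image-to-floor homography (here called H_img_to_floor, a hypothetical name):

```python
import numpy as np

def click_to_floor(u, v, H_img_to_floor):
    """Map a mouse click (u, v) in the camera image to (x, y) on the floor plane."""
    p = H_img_to_floor @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]       # target position for the robot
```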
