Wednesday, 21 March 2007

Grid SLAM

Lately I've been playing with a C# implementation of GridSLAM, with the aim of getting a system that can work with poor sonar data. In simulation this is starting to get good results when the model parameters are tweaked just so.


The basic idea of SLAM (Simultaneous Localization and Mapping) is, as the name implies, to make a map while guesstimating where you are within it. The output you hope for is a good map and a good estimate of your pose (x, y, theta). The inputs you get are noisy sonar data (range, angle) and poor odometry information (a bad x, y, theta from dead reckoning on wheel encoder ticks).
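To make those inputs and outputs concrete, here is a minimal sketch of the data types involved; the names are mine for illustration, not taken from the code discussed below.

struct Pose
{
    public double X, Y, Theta;    // position in metres, heading in radians
}

struct SonarReading
{
    public double Range;          // measured distance (noisy), metres
    public double Angle;          // beam angle relative to the robot, radians
}

struct OdometryDelta
{
    public double DeltaForward;   // distance travelled since the last step
    public double DeltaTheta;     // rotation since the last step, radians
}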

If you were to try to make a map using just the raw data you might get something like this:


In GridSLAM, you try to model the uncertainty of your odometry information and the uncertainty of your ranging data, and combine them into an estimate of your state (x, y, theta) using the correspondence between new data and your existing map.
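To illustrate one common way of scoring that correspondence (a hedged sketch, not necessarily what my implementation does): for each beam, ray-cast the range the existing grid map predicts from the candidate pose, and compare it with what the sonar actually measured under a Gaussian noise model.

using System;

// Stand-in for a ray-cast into the occupancy grid: given a beam angle,
// return the range the existing map says we should measure from the
// candidate pose.
delegate double RangeLookup(double beamAngle);

static class ScanMatcher
{
    // sigmaRange is a sensor-noise tuning parameter (an assumption here).
    public static double MatchScore(SonarReading[] scan,
                                    RangeLookup expectedRange,
                                    double sigmaRange)
    {
        double logLikelihood = 0.0;
        foreach (SonarReading z in scan)
        {
            double error = z.Range - expectedRange(z.Angle);
            // Gaussian measurement model: closer correspondence scores higher.
            logLikelihood += -(error * error) / (2.0 * sigmaRange * sigmaRange);
        }
        return Math.Exp(logLikelihood);
    }
}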






The major tool is called a particle filter: rather than modelling just the best guess, you model hundreds of possible states, each with its own map. At each step, each particle is moved according to the odometry data plus added Gaussian noise, supposedly sufficient to cover the expected range of error in the odometry data.
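A sketch of that prediction step might look like this; the Particle class and the noise scales are illustrative (the noise scales are exactly the kind of model parameter that has to be "tweaked just so").

using System;

class Particle
{
    public Pose Pose;      // pose hypothesis (Pose struct from the earlier sketch)
    public double Weight;  // importance weight from the latest scan match
    // in GridSLAM each particle also owns its own occupancy grid (omitted here)
}

static class MotionModel
{
    static readonly Random Rng = new Random();

    // Zero-mean Gaussian sample via the Box-Muller transform
    // (System.Random has no built-in normal sampler).
    static double Gaussian(double sigma)
    {
        double u1 = 1.0 - Rng.NextDouble();   // avoid Log(0)
        double u2 = Rng.NextDouble();
        return sigma * Math.Sqrt(-2.0 * Math.Log(u1))
                     * Math.Sin(2.0 * Math.PI * u2);
    }

    // Advance every particle by the reported odometry plus noise.
    public static void Predict(Particle[] particles, OdometryDelta odo,
                               double sigmaTrans, double sigmaRot)
    {
        foreach (Particle p in particles)
        {
            double d = odo.DeltaForward + Gaussian(sigmaTrans);
            p.Pose.Theta += odo.DeltaTheta + Gaussian(sigmaRot);
            p.Pose.X += d * Math.Cos(p.Pose.Theta);
            p.Pose.Y += d * Math.Sin(p.Pose.Theta);
        }
    }
}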


The image above left shows 1000 particles moving to the right, with a small clockwise rotation. You will notice that the bulk of the particles stay near the middle of the distribution. This is due to an essential but dangerous step of the particle filter called re-sampling, whose aim is to keep the distribution of particles closely related to the overall probability of being true. It does this by assigning a weight (0 to 1) to each particle based on the probability of the latest move, i.e. how close it was to perfect motion. It then picks particles to be promoted into the next generation with a probability related to the weight: less likely particles die, while more likely particles are duplicated to keep the total number of particles the same.
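One standard way to implement that pick-in-proportion-to-weight step is low-variance (systematic) resampling; this sketch is illustrative rather than the code behind these images.

using System;

static class Resampler
{
    public static Particle[] Resample(Particle[] particles, Random rng)
    {
        int n = particles.Length;
        double total = 0.0;
        foreach (Particle p in particles) total += p.Weight;

        // Walk a comb of n evenly spaced pointers across the cumulative
        // weights: unlikely particles die, likely ones are duplicated.
        Particle[] next = new Particle[n];
        double step = total / n;
        double target = rng.NextDouble() * step;
        double cumulative = particles[0].Weight;
        int i = 0;

        for (int m = 0; m < n; m++)
        {
            while (cumulative < target && i < n - 1)
            {
                i++;
                cumulative += particles[i].Weight;
            }
            // Copy the survivor. In full GridSLAM the particle's map must be
            // copied too, which is what makes large particle sets expensive.
            next[m] = new Particle { Pose = particles[i].Pose, Weight = 1.0 };
            target += step;
        }
        return next;
    }
}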

This is fine if there are enough particles to cover the space, but when there are too few, resampling can favour particles that happened to move well in the last step but aren't good representations of the longer-term movement.


The image on the left shows what can happen when using only 30 particles. The resampling has killed the true distribution by duplicating the 'wrong' particles, ending up in what is called 'particle deprivation': too many duplicates have followed the wrong path and lost coverage of the distribution they were trying to approximate.
Without re-sampling (left) the distribution would have been closer to the truth. Unfortunately, because GridSLAM updates a map for each particle at each step, it is difficult to use thousands of particles with limited memory and processor power. 100-200 is more typical, so great care has to be applied in choosing when to resample.


Each time a particle is duplicated, part of the history of possible movements is destroyed with the lost particle. In a maze with multiple levels of loops, a small loop can exaggerate confidence, losing the particle diversity needed to later close the large loop.


Much recent research has been done on ways to avoid this problem, some of which are:
  • Don't re-sample if the distribution is too focused (sketched below)
  • Compress the diversity of weights, so resampling is less vicious
  • Remember previous diversity and revert to it after a small loop
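The first of these can be made concrete with the usual effective-sample-size test; the threshold fraction is a tuning parameter, with 0.5 a common choice (a sketch under my own naming).

static class ResampleGate
{
    // N_eff = 1 / sum(w_i^2) over normalised weights: it equals the particle
    // count when weights are uniform and approaches 1 as they concentrate.
    public static bool ShouldResample(Particle[] particles,
                                      double thresholdFraction)
    {
        double total = 0.0;
        foreach (Particle p in particles) total += p.Weight;

        double sumSquares = 0.0;
        foreach (Particle p in particles)
        {
            double w = p.Weight / total;   // normalise
            sumSquares += w * w;
        }
        double nEff = 1.0 / sumSquares;
        return nEff < thresholdFraction * particles.Length;
    }
}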

As an example, consider this robot path. It starts at the top left (1), moves to the right and around the first loop (2). Just after (2) it starts to see places it has seen before, so the uncertainty about the movement in the small loop is reduced. In this case it closed the small loop pretty well, and continues back across the top, past where it started (3) and down to the bottom left (4). From (2) to (3) it was reducing its error; then from (3) to (4) more error creeps into the distribution in preparation for closing the larger loop at (5).



The particles are shown as a red mess near (5). The graph shows how the root mean square (RMS) error of the entire particle set, relative to perfect odometry, changes over that time. Just before (5) it is high, and the current aggregate best estimate is somewhat misaligned, but recoverable.
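For reference, that metric can be computed along these lines (a sketch; it relies on the ground-truth pose, which is only knowable in simulation).

using System;

static class Metrics
{
    // RMS position error of the whole particle set against the true pose.
    public static double RmsError(Particle[] particles, Pose truth)
    {
        double sumSquares = 0.0;
        foreach (Particle p in particles)
        {
            double dx = p.Pose.X - truth.X;
            double dy = p.Pose.Y - truth.Y;
            sumSquares += dx * dx + dy * dy;
        }
        return Math.Sqrt(sumSquares / particles.Length);
    }
}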

A little later, it managed to recover to a decent map, but only because all the model parameters had been tweaked to make this possible. With more motion error, fewer particles or more trust in the sensor data, it would have failed spectacularly.

Many of the published papers work well due to massive overlap between the current sensor data and the existing map. By using a long-range laser with 180 data points per scan, a fairly accurate match is possible that removes much of the rotational uncertainty. Poorer systems with just a few highly noisy wide-beam, short-range sonars are far less likely to be good at closing large loops. I'm hoping that inserting rotation estimates from image tracking and a digital compass will help alleviate some of these problems.

My next step is to port this code from a C# test app into an MSRS service so that it can be evaluated with real-world data.

10 comments:

Jason said...

Is this C# implementation of GridSLAM available?
I'd be very interested.

Chris Kilner said...

Hi Jason,

Yes, it is fully C#, currently working in a simulation program I wrote that allows you to vary the number, type and position of the sonars, and to vary all the properties of the beam and cone models on a per-sensor basis.

I'm in the process of cleaning it up to release as an MSRS service and may also clean up the app to be a teaching tool, but thought that I should test it in the real world first.

If you contact me privately I'd happily share the code 'as is'.

Jason said...

Hi, Chris,

I've sent a mail to your diversity address.

Thx!

Unknown said...

Hi Chris,
I am writing my master's thesis, which is about robotic mapping. I am very interested in your program. Is it possible for you to share your code? I would be very happy. Thanks

Berkan

Chris Kilner said...

Hi Berkan,

The project is currently on Codeplex in setup mode, meaning that only administrators can see the code. The part that is dragging is the MSRS implementation, which is not difficult in itself; it is just that I haven't had the time lately to polish it off into something worth presenting (new job). If you had a Codeplex username, I could add you as a 'developer', which would give you full access.

Hope this helps,

Anonymous said...

Hi Chris,
I am a student in China and I am very interested in your program. Could you add me as a 'developer' on Codeplex so that I can share your code? Thank you! My username is: zhujihua

Chris Kilner said...

Hi zhujihua,

It is now out of setup mode and available to all.

Again, alas, I haven't had time to finish up the code properly, but feel free to clean and extend.

http://www.codeplex.com/GridSlamMSRS

Billias said...

I have been working on the same project for the past months but with no result.
I am using MobotSim to program the robot, and the whole procedure is quite slow. I am now stuck at a point where I can create accurate maps when no error exists in the odometry, but I don't know what the next step should be.
I am using arrays that store the sensor measurements at each step, and the robot's pose at those steps, but I don't know what to do next. Any clues?

Masters Thesis said...

This kind of blog is always useful for blog readers; it helps people during research. Your post is one such for blog readers.

castello_epul said...

Hi Chris,

Your blog is very wonderful for an amateur programmer like me and I'm very happy if you can add me as a developer on Codeplex so that I can share your code. Thank you! My username is: sbahri6